
A PRELIMINARY PROJECT REPORT ON

INTERVIEW PREPARATION MODEL


WITH CONTEXT AND CONFIDENCE
ANALYSIS
SUBMITTED TO SAVITRIBAI PHULE PUNE UNIVERSITY IN PARTIAL
FULFILLMENT OF THE REQUIREMENTS FOR THE AWARD
OF THE DEGREE OF

BACHELOR OF ENGINEERING (COMPUTER ENGINEERING)

SUBMITTED BY

Akhade Ritesh S. Seat No: B401170126


Rathod Niranjan S. Seat No: B401170183
Wagh Ashutosh V. Seat No: B401170194
Jadhav Gaurav C. Seat No: B401170152

Under The Guidance of


Dr. M. T. JAGTAP

Sanghavi College of Engineering


Nashik

DEPARTMENT OF COMPUTER ENGINEERING


Sanghavi College Of Engineering,
Nashik. 422202
2024-2025.
Sanghavi College Of Engineering, Nashik
DEPARTMENT OF COMPUTER ENGINEERING
2024–2025
CERTIFICATE
This is to certify that the Project entitled

INTERVIEW PREPARATION MODEL


WITH CONTEXT AND CONFIDENCE
ANALYSIS

SUBMITTED BY

Akhade Ritesh S. Seat No: B401170126


Rathod Niranjan S. Seat No: B401170183
Wagh Ashutosh V. Seat No: B401170194
Jadhav Gaurav C. Seat No: B401170152

is a bona fide work carried out by the students under the supervision of Dr. M. T. Jagtap, and it is submitted towards the partial fulfillment of the requirements of the Bachelor of Engineering (Computer Engineering) Project.

Dr. M. T. Jagtap Dr. M. T. Jagtap Prof. P. Biswas


Internal Guide H.O.D Principal
Dept. of Computer Engg. Dept. of Computer Engg. Dept. of Computer Engg.

External Examiner
ACKNOWLEDGEMENT

We take this opportunity to express our heartfelt thanks to all those who contributed to the completion of the project and seminar on “Interview Preparation Model with Context, Confidence and Sentiment Analysis.”
We express our deep sense of gratitude to our project guide, Dr. M. T. Jagtap,
Department of Computer Engineering, Sanghavi College of Engineering, Nashik, for
his valuable guidance and continuous motivation. We gratefully acknowledge his
support on many occasions for the improvement of this seminar.
We also extend our sincere thanks to Dr. M. T. Jagtap, Head of the Computer
Engineering Department, for permitting us to use departmental facilities and for his
constant encouragement.
Lastly, we would like to express our gratitude to all staff members of the Computer
Department, as well as to our family and friends, for their unwavering support and
cooperation during the compilation of this report.

Student Names:
Niranjan Rathod
Ritesh Akhade
Ashutosh Wagh
Gaurav Jadhav
ABSTRACT

The ability to prepare effectively for an interview plays a pivotal role in a candi-
date’s success, as it influences both performance and perception. Traditional in-
terview preparation methods focus on assessing technical skills, reviewing common
questions, and practicing responses. However, a more comprehensive approach should also consider the contextual dynamics of an interview, such as tone, environment, and the confidence conveyed by the candidate.
This project proposes an Interview Preparation Model (IPM) that incorporates
both contextual analysis and confidence evaluation to better prepare candidates for
interview scenarios. The model leverages advanced Natural Language Processing
(NLP) techniques to analyze both the historical context of the company and the
emotional tone of the interview.
Contextual analysis involves interpreting company-specific language, culture, and
expectations to help candidates tailor their responses. Confidence analysis evaluates
tone and delivery to guide candidates in adjusting their answers accordingly.
By integrating these layers of analysis, the model helps candidates understand
not just what to say but also how to say it, leading to more effective and authentic
interview interactions.
Keywords: Sentiment Analysis, Context Analysis, Confidence Analysis, BERT,
Whisper, Machine Learning, Gradio, User Interface.
Contents

TITLE PAGE

ACKNOWLEDGEMENT

ABSTRACT

1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Problem Definition and Objectives
  1.4 Project Scope and Limitations
  1.5 Methodologies of Problem Solving

2 LITERATURE SURVEY
  2.1 Literature Summary

3 Project Plan
  3.0.1 Project Estimate
  3.1 Project Phases
    3.1.1 Risk Management
    3.1.2 Project Schedule
    3.1.3 Team Organization
  3.2 Tools and Technologies
  3.3 Risk Analysis and Mitigation
  3.4 Team Roles and Responsibilities
  3.5 Deliverables

4 SOFTWARE REQUIREMENT SPECIFICATION
  4.1 Assumptions and Dependencies
  4.2 Functional Requirements
    4.2.1 System Feature 1: Speech Input and Transcription
    4.2.2 System Feature 2: Contextual Analysis
    4.2.3 System Feature 3: Sentiment Analysis
    4.2.4 System Feature 4: Confidence Analysis
    4.2.5 System Feature 5: Feedback Display
  4.3 External Interface Requirements
    4.3.1 User Interfaces
    4.3.2 Hardware Interfaces
    4.3.3 Software Interfaces
    4.3.4 Communication Interfaces
  4.4 Nonfunctional Requirements
    4.4.1 Performance Requirements
    4.4.2 Safety Requirements
    4.4.3 Security Requirements
    4.4.4 Software Quality Attributes
  4.5 System Requirements
    4.5.1 Database Requirements
    4.5.2 Software Requirements (Platform Choice)
    4.5.3 Hardware Requirements

5 SYSTEM DESIGN
  5.1 System Architecture
  5.2 Mathematical Model
  5.3 Data Flow Diagrams (DFD)
  5.4 Entity Relationship Diagram (ERD)
  5.5 UML Diagrams

6 PROJECT IMPLEMENTATION
  6.0.1 Module-Wise Implementation
  6.0.2 Testing During Implementation
  6.0.3 Sample Output

7 SOFTWARE TESTING
  7.1 Software Testing
    7.1.1 Types of Testing
    7.1.2 Test Cases and Test Results

8 RESULTS AND EVALUATION
  8.1 Result Screenshot

9 CONCLUSION
  9.0.1 Conclusions
  9.0.2 Future Work
  9.0.3 Applications

10 REFERENCES
Appendix-A
Appendix-B
Appendix-C
List of Figures

5.1 System Architecture Flowchart
5.2 Data Flow Diagram (DFD)
5.3 Entity Relationship Diagram (ERD)
5.4 Use Case Diagram
5.5 Activity Diagram
5.6 Sequence Diagram
8.1 user interface
8.2 positive analysis - answer is contextually relevant and confident
8.3 negative analysis - answer is contextually irrelevant and not confident
8.4 unconfident but relevant
8.5 sentiment analysis
List of Tables

3.1 Project Phases and Timeline
3.2 Project Timeline
3.3 Risk Analysis and Mitigation
3.4 Team Roles and Responsibilities
7.1 Test Cases and Test Results for Interview Preparation Model

Chapter 1

Introduction

In today’s competitive job market, excelling in interviews is crucial for securing em-
ployment opportunities. While academic qualifications and technical skills play a
significant role, the ability to communicate effectively, remain confident under pres-
sure, and express relevant answers within context is equally important. Traditional
interview preparation methods, such as mock interviews or coaching sessions, often
lack personalized feedback, making it difficult for candidates to identify and improve
upon their weaknesses.
This project proposes an AI-based Interview Preparation Model that integrates
Contextual Analysis, Sentiment Detection, and Confidence Scoring to provide com-
prehensive and intelligent feedback on users’ interview responses. By leveraging state-
of-the-art Natural Language Processing (NLP) techniques, including BERT (Bidirec-
tional Encoder Representations from Transformers), and speech recognition systems
like Whisper, the model processes spoken or written responses to interview questions
and evaluates them on multiple dimensions.
The key innovation of this model lies in its ability to analyze not only the relevance
of the content (context) but also the emotional tone (sentiment) and the level of
confidence conveyed in the response. This enables users to receive targeted feedback
and improve both their verbal and non-verbal communication skills.
Furthermore, the system features a user-friendly interface, making it accessible to
candidates from diverse educational and professional backgrounds. The integration
of AI-driven insights with intuitive design offers a modern, scalable, and effective
approach to interview preparation.


1.1 Overview

The proposed Interview Preparation Model is an AI-powered system designed to simulate real interview scenarios and provide analytical feedback on a candidate’s
responses. It utilizes advanced Natural Language Processing (NLP) and Machine
Learning (ML) techniques to assess the contextual relevance, emotional sentiment,
and confidence level of spoken or written answers. This intelligent system is particu-
larly beneficial for candidates who wish to practice and refine their interviewing skills
independently and efficiently.
The model operates by capturing user responses through voice or text input. Voice
inputs are first transcribed into text using Automatic Speech Recognition (ASR)
technology, specifically Whisper, an open-source model by OpenAI. Once the text
is available, it is processed through a fine-tuned BERT model for semantic and sen-
timent analysis. Simultaneously, features like tone, pitch, and speech rate are used
to calculate a confidence score, offering insights into how assertively the candidate
communicated.
The results are displayed through an interactive user interface that highlights
strengths and suggests improvements. Each answer is broken down into three key
components:
• Contextual Accuracy: How relevant the answer is to the question asked.

• Sentiment: The emotional tone (positive, negative, or neutral) of the response.

• Confidence: A numerical or categorical representation of the speaker’s confidence.
This holistic evaluation provides a unique advantage over traditional preparation
methods by offering data-driven, personalized feedback. The system is scalable, can
be expanded to include different types of interviews (HR, technical, behavioral), and
can be integrated into mobile or web platforms for broader accessibility.

1.2 Motivation

Interview preparation typically involves mock sessions and human feedback, which
may not always be timely, consistent, or accessible. Furthermore, candidates often lack insight into their non-verbal communication and tone, which play a vital role in
creating a positive impression. The motivation behind this project is to develop an
intelligent, automated system that not only simulates interview conditions but also
analyzes verbal responses on three critical fronts: context alignment, emotional sentiment, and confidence level. This model aims to empower users to practice anytime,
receive real-time feedback, and build confidence systematically.

1.3 Problem Definition and Objectives

Problem Definition

Candidates preparing for interviews face challenges in identifying weaknesses in their responses, particularly in aligning their answers with the question, as well as in tone and delivery. Manual evaluations are subjective and not scalable.

Objectives

• To develop an AI-powered system that transcribes spoken responses using speech-to-text conversion.

• To analyze the contextual accuracy of answers using NLP-based similarity metrics.

• To perform sentiment analysis to understand the emotional tone of the response.

• To evaluate the confidence level based on vocal features like pitch, tone, and
fluency.

• To provide users with actionable feedback for improvement.

• To design a simple and intuitive user interface for ease of use.

1.4 Project Scope and Limitations

Scope

• Development of a web or mobile-based platform for interview practice.


• Integration of speech recognition (e.g., Whisper) for transcription.

• Use of pretrained models (e.g., BERT) for context analysis.

• Incorporation of sentiment analysis tools and prosody-based confidence assessment.

• Feedback generation in real-time or post-response.

Limitations

• Accuracy of transcription may vary with background noise or accents.

• Contextual evaluation is dependent on the dataset and fine-tuning quality.

• Sentiment and confidence detection are inherently probabilistic and may misinterpret responses in edge cases.

• The system cannot yet account for non-verbal cues like facial expressions or
body language.

1.5 Methodologies of Problem Solving

• Speech Recognition: Use of models like Whisper or Google Speech-to-Text to transcribe spoken answers into text.

• Contextual Analysis: Fine-tuned BERT model used to compute semantic similarity between the user’s answer and an ideal response.

• Sentiment Analysis: Application of sentiment classification techniques (e.g., VADER, TextBlob) to determine the positivity, negativity, or neutrality of the response.

• Confidence Analysis: Utilization of acoustic features (e.g., pause rate, volume, pitch variation) to infer speaker confidence.

• User Interface: Development of an intuitive frontend (e.g., with Gradio or Android) to capture responses and display analysis feedback.
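
Taken together, these steps form one processing pipeline. The following is a minimal sketch of that flow, assuming the whisper, sentence-transformers, and transformers packages are installed; the analyse_response function, the reference_answer argument, and the all-MiniLM-L6-v2 embedding model are illustrative choices rather than fixed parts of the design (acoustic confidence scoring is sketched separately in Chapter 6).

import whisper
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

asr = whisper.load_model("base")                    # speech-to-text
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # sentence embeddings for context
sentiment = pipeline("sentiment-analysis")          # emotional tone classifier

def analyse_response(audio_path, reference_answer):
    """Transcribe an answer, then score context relevance and sentiment."""
    text = asr.transcribe(audio_path)["text"]
    relevance = util.cos_sim(embedder.encode(text),
                             embedder.encode(reference_answer)).item()
    tone = sentiment(text)[0]   # e.g. {'label': 'POSITIVE', 'score': 0.98}
    return {"transcript": text, "context_score": relevance, "sentiment": tone}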


Chapter 2

LITERATURE SURVEY

2.1 Literature Summary

A literature survey was conducted to study existing systems, tools, and research pa-
pers related to interview preparation, natural language processing, sentiment analysis,
and confidence estimation. The goal was to identify the limitations of current systems
and find opportunities to improve the user experience using AI techniques.

1. Interview Preparation Tools

Several platforms, such as Pramp, InterviewBuddy, and Glassdoor, provide mock in-
terviews and preparation tips. However, these platforms mainly offer question banks
and human-led interviews. They lack AI-driven feedback and do not assess the can-
didate’s tone, emotion, or confidence level in real time.
Limitation Identified: Manual feedback is time-consuming and subjective, and
such platforms are not scalable or available 24/7.

2. NLP for Contextual Analysis

Research in NLP has shown that BERT (Bidirectional Encoder Representations from
Transformers) is highly effective in understanding sentence semantics and context.
Papers such as “BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding” (Devlin et al., 2019) demonstrated that BERT outperforms earlier
models in tasks like question answering and textual similarity.
Application in Our Project: BERT is used to determine how well a candidate’s
response matches the intended meaning of the interview question.


3. Sentiment Analysis

Numerous models like TextBlob, VADER, and fine-tuned BERT classifiers have been
used in customer review analysis, social media monitoring, and feedback systems.
Research such as “Deep Learning for Sentiment Analysis: A Survey” outlines the
effectiveness of using deep learning models over traditional lexicon-based approaches.
Relevance: Sentiment analysis in interviews helps identify if the response sounds
negative, neutral, or positive, which impacts the overall impression.

4. Confidence Detection from Speech

Academic studies have explored extracting prosodic features (e.g., pitch, tone, pauses)
and textual cues (e.g., hesitation words) to estimate speaker confidence. Papers
such as “Predicting Confidence Scores from Speech Using Machine Learning” and
“Speech-Based Emotion and Confidence Estimation for Interview Coaching” have
shown promising results.
Approach Used: In our system, vocal signals are analyzed to derive confidence
scores, combined with linguistic analysis for greater accuracy.

5. Voice Recognition and ASR Models

Whisper by OpenAI is a recent, robust speech-to-text model trained on diverse multilingual and multitask datasets. It has demonstrated high accuracy even in noisy
conditions and with varying accents.
Integration in Our Project: Whisper is used to convert spoken answers into text,
enabling real-time feedback on voice input.

Key Literature Sources

[1] Devlin et al. (2019) – BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

This paper introduces BERT, a groundbreaking NLP model that uses bidirectional transformers to understand context in language. It has significantly improved performance in various NLP tasks like question answering and sentence classification.

[2] Radford et al. (2022) – Whisper: Robust Speech Recognition via Large-Scale Weak Supervision

Whisper is a powerful ASR model by OpenAI trained on diverse audio data. It transcribes spoken language into text accurately, making it highly effective for converting verbal interview responses into text for analysis.

[3] Poria et al. (2023) – A Review of Affective Computing


This paper reviews affective computing and its application in emotion detection and
sentiment analysis. It discusses how machines can interpret human emotions, aiding
in understanding interviewee stress or confidence levels.

[4] Liu (2022) – Sentiment Analysis and Opinion Mining


Liu provides a comprehensive overview of sentiment analysis techniques, from rule-
based methods to machine learning. It’s essential for extracting opinions and emo-
tional cues from textual responses in interview scenarios.

[5] Zhou & Chen (2023) – A Survey on Contextual Word Representations


This survey explores various contextual word representation models like BERT and
ELMo. These models enhance NLP tasks by providing deep semantic understanding,
useful in evaluating the relevance of interview answers.


Chapter 3

Project Plan

The project plan outlines the timeline, milestones, team responsibilities, and resources
required to successfully develop and deploy the Interview Preparation Model. The
project is divided into multiple phases, each with defined goals and deliverables.

Project Resources

• Human Resources: 4 team members (covering ML development, deployment, UI design, and testing/documentation)

• Hardware: Laptops with at least 16 GB RAM, NVIDIA GPU (for training models)

• Software: Python, TensorFlow/PyTorch, Hugging Face Transformers, Android Studio, Gradio, Whisper API, GitHub

• Miscellaneous: Internet access, cloud services (optional for model hosting)

3.0.1 Project Estimate

Reconciled Estimates

This project is estimated based on task complexity, resource availability, and time-
line constraints. The time and effort estimates were reconciled across different team
members and phases.


3.1 Project Phases

Phase No. | Phase Name                  | Description                                                             | Duration
1         | Requirement Gathering       | Identify user needs, define system scope, and collect datasets.        | Week 1 – Week 2
2         | Research & Literature Study | Study existing systems and select appropriate models for NLP and ASR.  | Week 2 – Week 3
3         | System Design               | Design system architecture, data flow diagrams, and define components. | Week 4
4         | Model Training & Testing    | Fine-tune BERT, implement sentiment/confidence analysis models.        | Week 5 – Week 7
5         | Frontend Development        | Develop user interface using Gradio (or Android Studio).               | Week 6 – Week 8
6         | Backend Integration         | Integrate all modules: speech-to-text, NLP, feedback engine.           | Week 7 – Week 9
7         | Testing & Debugging         | Perform system testing, fix bugs, and validate results.                | Week 9 – Week 10
8         | Documentation & Report      | Prepare project documentation and final report.                        | Week 10 – Week 11
9         | Final Review & Deployment   | Conduct presentations, deploy demo version, collect feedback.          | Week 12

Table 3.1: Project Phases and Timeline


3.1.1 Risk Management

Risk Identification

• Technical risk: Model underfitting or poor accuracy

• Data risk: Insufficient or noisy training data

• Integration risk: Issues combining ML model with the mobile app

• Timeline risk: Delays in any phase causing cascading delays

Risk Analysis

• High likelihood and impact: Data availability, ML model performance

• Medium: Web-ML integration issues

• Low: Hardware/software compatibility

Overview of Risk Mitigation, Monitoring, Management

• Use pre-trained models (e.g., BERT, Whisper) to reduce training time

• Perform early-stage integration testing

• Maintain backup datasets and version control using Git

• Weekly review meetings for progress and risk re-assessment

3.1.2 Project Schedule

Task Network

A task dependency diagram can show how:

• Data collection must finish before model training starts

• Model integration depends on both training and UI development completion


Week Task
1 Requirements gathering
2–3 Data collection and cleaning
4–6 Model training and evaluation
5–6 UI design in parallel
7 Integration of model and UI
8 Testing and bug fixes
9 Hugging Face deployment
10 Documentation and presentation

Table 3.2: Project Timeline

Timeline Chart

3.1.3 Team Organization

Management Reporting and Communication

• Weekly team meetings (in person or online)

• Use of GitHub for code collaboration and versioning

• Google Docs for collaborative report writing

• WhatsApp group or Slack for daily communication updates

• Weekly progress reporting to the project guide/mentor

3.2 Tools and Technologies

• Programming Language: Python

• Frameworks: Hugging Face, PyTorch, Gradio, Android Studio

• Models: BERT, Whisper (ASR), Sentiment Classifier

• Collaboration Tools: GitHub, Google Docs, Trello (for task management)


3.3 Risk Analysis and Mitigation

Risk                             | Impact | Mitigation Strategy
Model Inaccuracy                 | Medium | Fine-tune with custom datasets and apply cross-validation
Audio Noise or Accent Variations | Medium | Use Whisper for robust ASR handling
Integration Challenges           | Medium | Develop modules independently and link using APIs
Time Overrun                     | High   | Weekly reviews and strict phase deadlines

Table 3.3: Risk Analysis and Mitigation

3.4 Team Roles and Responsibilities

Member          | Role                          | Responsibilities
Ritesh Akhade   | Team Leader, Developer        | Project development and ML model training
Niranjan Rathod | Deployment Engineer, Planning | Project deployment on Hugging Face, project ideation support, research paper analysis, and development
Ashutosh Wagh   | UI Designer                   | UI design and development
Gaurav Jadhav   | Documentation                 | Research paper analysis and development

Table 3.4: Team Roles and Responsibilities


3.5 Deliverables

• Working Interview Preparation System (Web/App)

• Final Documentation Report

• Presentation Slides

• Demo Video (if applicable)


Chapter 4

SOFTWARE REQUIREMENT SPECIFICATION

4.1 Assumptions and Dependencies

• The system assumes the user will have access to a stable internet connection
for voice input processing and model inference.

• The performance of speech recognition may depend on the clarity of the user’s
microphone and ambient noise conditions.

• The model assumes that responses will be in English, as the NLP components
are trained primarily on English datasets.

• The system depends on external libraries such as Hugging Face Transformers, Whisper (OpenAI), and Gradio for user interaction.

4.2 Functional Requirements

4.2.1 System Feature 1: Speech Input and Transcription

• The system shall accept speech input from the user during mock interview
sessions.

• The system shall transcribe the spoken input into text using a speech recognition
engine (e.g., Whisper).

4.2.2 System Feature 2: Contextual Analysis

• The system shall compare the user’s answer with an expected or ideal answer
using a fine-tuned BERT model.


• The system shall generate a similarity score or relevance rating.

4.2.3 System Feature 3: Sentiment Analysis

• The system shall analyze the sentiment (positive, negative, or neutral) of the
user’s response text.

• Sentiment results shall contribute to the emotional tone analysis in the feedback.

4.2.4 System Feature 4: Confidence Analysis

• The system shall assess the user’s vocal confidence based on audio characteristics
such as pitch, volume, and pause frequency.

• The system shall display a confidence level (e.g., high, medium, low) with sup-
porting feedback.

4.2.5 System Feature 5: Feedback Display

• The system shall provide feedback in real-time or after each question, combining
context, sentiment, and confidence analysis.

• The system shall allow users to review past performances through session logs.

4.3 External Interface Requirements

4.3.1 User Interfaces

• A graphical interface (e.g., Gradio or Android app) shall allow users to:

  – Record and submit responses.
  – View analysis and feedback.
  – Navigate between questions or sessions.

4.3.2 Hardware Interfaces

• Microphone input device required for voice capture.

• Speakers or a display device required for audio/visual feedback (if applicable).


4.3.3 Software Interfaces

• Integration with Whisper (OpenAI) or Google Speech API for transcription.

• Integration with BERT model (via Hugging Face Transformers) for context
analysis.

• Use of NLP libraries such as NLTK or TextBlob for sentiment analysis.

4.3.4 Communication Interfaces

• The system may communicate with external APIs or cloud-based ML models.

• RESTful API endpoints shall be used for backend communication (if applica-
ble).

4.4 Nonfunctional Requirements

4.4.1 Performance Requirements

• The system should process and return feedback within 3–5 seconds per response.

• It must support concurrent sessions if deployed as a web app.

4.4.2 Safety Requirements

• The system must ensure that microphone access is explicitly permitted by the
user.

• It should handle errors gracefully (e.g., audio input not detected).

4.4.3 Security Requirements

• User data and session logs must be stored securely, with privacy maintained.

• Transmitted audio or text data should be encrypted during communication with APIs.


4.4.4 Software Quality Attributes

• Usability: The interface must be user-friendly for non-technical users.

• Reliability: The system must handle unexpected input and still provide mean-
ingful feedback.

• Maintainability: The code should be modular to allow easy updates to models and logic.

4.5 System Requirements

4.5.1 Database Requirements

A lightweight database (e.g., SQLite, Firebase) shall be used to store:

• User session logs

• Transcription texts and evaluation scores

• System configurations
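
As a minimal illustration, assuming SQLite is chosen, a single session-log table could cover these items; the table and column names below are illustrative and not part of the specification.

import sqlite3

conn = sqlite3.connect("interview_prep.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS session_logs (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id       TEXT NOT NULL,
    question      TEXT NOT NULL,
    transcript    TEXT,                 -- speech-to-text output
    context_score REAL,                 -- relevance rating, 0.0 to 1.0
    sentiment     TEXT,                 -- positive / neutral / negative
    confidence    TEXT,                 -- high / medium / low
    created_at    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")
conn.commit()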

4.5.2 Software Requirements (Platform Choice)

• Python 3.x for backend processing

• Gradio for prototyping or Android Studio (Java) for mobile app

• Libraries: Transformers (Hugging Face), OpenAI Whisper, NLTK, PyTorch/TensorFlow

4.5.3 Hardware Requirements

• Minimum 4 GB RAM, multi-core CPU for local execution

• GPU (optional but recommended) for faster inference of large models

• Microphone and speakers (if voice input/output are used)


Chapter 5

SYSTEM DESIGN

5.1 System Architecture

Figure 5.1: System Architecture Flowchart


The system is designed to help users prepare for interviews by analyzing their spoken
responses using Natural Language Processing (NLP) techniques. The architecture
consists of the following layers:

1. User Interface Layer:

• A web or mobile app interface (built with Gradio or Android Studio) where
users interact with the system.
• Users speak their responses into the system.

2. Audio Processing Layer:

• Audio input is processed using Whisper (OpenAI’s ASR) to transcribe speech to text.

3. NLP Analysis Layer:

• A BERT-based model analyzes the transcribed text for:

  – Contextual relevance to the interview question.
  – Confidence in the response using classifier output probabilities.
  – Sentiment (positive, neutral, negative) using a sentiment analysis model.

5.2 Mathematical Model

Let the system be represented as a 5-tuple:

S = {I, O, P, F, R}

Where:

• I = Input set = {Audio response}

• O = Output set = {Transcribed text, Context score, Confidence score, Sentiment label}

• P = Processes = {Transcription (T), Context Analysis (C), Confidence Scoring (S), Sentiment Analysis (A)}


• F = Functions mapping inputs to outputs:

  T : Audio → Text
  C : Text → Context relevance score
  S : Text → Confidence score
  A : Text → Sentiment label

• R = Set of Rules and Conditions for model predictions and feedback thresholds
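
A minimal Python mirror of this model, assuming the transcription T has already produced the text and that C, S, and A are implemented by the analysis modules; the Evaluation type and parameter names are illustrative.

from typing import Callable, NamedTuple

class Evaluation(NamedTuple):
    context_score: float     # C : Text -> Context relevance score
    confidence_score: float  # S : Text -> Confidence score
    sentiment_label: str     # A : Text -> Sentiment label

def evaluate(text: str,
             C: Callable[[str], float],
             S: Callable[[str], float],
             A: Callable[[str], str]) -> Evaluation:
    """Apply the processes C, S, and A to one transcribed answer (the output of T)."""
    return Evaluation(C(text), S(text), A(text))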

5.3 Data Flow Diagrams (DFD)

Figure 5.2: Data Flow Diagram (DFD)

The Data Flow Diagram (DFD) illustrates the flow of data within the Interview
Preparation Model. The user interacts with the system by answering audio-based
questions, which are transcribed using the Whisper model. The transcriptions are
analyzed for context, confidence, or sentiment, and results are stored and used to
generate feedback. All data, including questions, transcriptions, and analysis results,
are maintained in structured storage.


5.4 Entity Relationship Diagram (ERD)

Figure 5.3: Entity Relationship Diagram (ERD)

The ERD illustrates the database structure of the Interview Preparation Model. It
includes four main entities: users, questions, answers, and analysis results. Each user
can submit multiple answers to various questions, and each answer is analyzed to
generate feedback including confidence, relevance, or sentiment. The relationships
ensure that data is linked for personalized and question-specific evaluations.


5.5 UML Diagrams

1. Use Case Diagram

Figure 5.4: Use Case Diagram

This use case diagram outlines the user interactions with the interview preparation
system. The user can start an interview session, submit answers, receive feedback,
retry a question, proceed to the next one, or end the session.


2. Activity Diagram

Figure 5.5: Activity Diagram

This activity diagram shows the internal process flow—from receiving user input
to performing transcription, branching into context or sentiment analysis based on
the answer type, and generating feedback.


3. Sequence Diagram

Figure 5.6: Sequence Diagram

The sequence diagram shows the step-by-step interaction between components in the interview system. The user answers a question, which is transcribed by the
Whisper model. The Analyzer then evaluates the response—technically or sentimen-
tally—based on the question type. Finally, feedback is displayed to the user, who can
choose to retry, move to the next question, or end the session.


Chapter 6

PROJECT IMPLEMENTATION

The implementation phase involved building and integrating the core modules of
the system — including speech recognition, natural language processing, confidence
evaluation, and feedback generation — into a unified platform. Each module was
developed, tested, and then integrated in a step-by-step manner to ensure smooth
system functionality.

6.0.1 Module-Wise Implementation

User Interface (UI)

Tools Used: Gradio (Web)


Functionality:

• Accepts user input (voice or text)

• Displays questions

• Shows feedback visually (progress bars, sentiment tags, scores)

UI Features:

• Clean layout

• Retry option

• Response recording and real-time updates
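
A minimal Gradio sketch of such a layout, assuming an analyse() function wrapping the transcription and analysis modules is available; the component labels and placeholder return values are illustrative.

import gradio as gr

def analyse(audio_path, question):
    # Placeholder: call the transcription and analysis modules here.
    return "transcript...", "Relevant", "Confident", "Positive"

demo = gr.Interface(
    fn=analyse,
    inputs=[gr.Audio(type="filepath", label="Record your answer"),
            gr.Textbox(label="Interview question")],
    outputs=[gr.Textbox(label="Transcript"),
             gr.Label(label="Context"),
             gr.Label(label="Confidence"),
             gr.Label(label="Sentiment")],
    title="Interview Preparation Model",
)

if __name__ == "__main__":
    demo.launch()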


Voice Input & Transcription

Model Used: OpenAI’s Whisper


Implementation:

• Captures audio through microphone

• Converts speech to text

• Handles noisy backgrounds and multiple accents

Libraries: whisper, pydub

Listing 6.1: Whisper Transcription Code


import whisper

model = whisper.load_model("base")
result = model.transcribe("response.wav")
text = result["text"]

Context Analysis Module

Model Used: BERT (Bidirectional Encoder Representations from Transformers)


Purpose: Evaluates semantic similarity between the expected answer and the
user’s response.
Implementation Steps:

• Tokenize question and response

• Use cosine similarity on BERT embeddings

Output: Context Relevance Score (0–100%)

Listing 6.2: BERT Context Similarity


from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
# ideal_answer is the expected (reference) answer the user's response is compared against
score = util.cos_sim(model.encode(user_answer), model.encode(ideal_answer))


Sentiment Analysis Module

Model Used: Fine-tuned BERT or pre-trained sentiment model


Functionality: Classifies the emotional tone of the user’s answer as:

• Positive

• Neutral

• Negative

Libraries: transformers, torch

Listing 6.3: Sentiment Classification Code


from transformers import pipeline

classifier = pipeline("sentiment-analysis")
sentiment = classifier(user_answer)

Confidence Estimation

Inputs:

• Speech features: pitch, pauses, volume (using librosa, parselmouth)

• Text features: hesitation words, repetition, length of response

Technique: Rule-based logic or logistic regression to compute a Confidence Score

Listing 6.4: Audio Feature Extraction


import librosa

y, sr = librosa.load("response.wav")
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
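
Building on the feature extraction above, a minimal rule-based sketch of the confidence score follows; the thresholds, weights, and hesitation-word list are illustrative assumptions rather than calibrated values.

import librosa
import numpy as np

HESITATIONS = {"um", "uh", "er", "like"}   # illustrative filler-word list

def confidence_score(audio_path, transcript):
    """Combine simple speech and text cues into a 0-1 confidence score."""
    y, sr = librosa.load(audio_path)
    loudness = librosa.feature.rms(y=y).mean()        # overall energy
    f0 = librosa.yin(y, fmin=75, fmax=400, sr=sr)     # pitch track
    pitch_var = np.std(f0) / (np.mean(f0) + 1e-6)     # relative pitch variation

    words = transcript.lower().split()
    hesitation_rate = sum(w in HESITATIONS for w in words) / max(len(words), 1)

    score = 0.5
    score += 0.2 if loudness > 0.02 else -0.2         # louder delivery reads as more assured
    score -= 0.3 * min(hesitation_rate * 5, 1.0)      # penalise filler words
    score -= 0.2 if pitch_var > 0.4 else 0.0          # very unstable pitch
    return float(np.clip(score, 0.0, 1.0))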

6.0.2 Testing During Implementation

• Unit testing for individual modules

• Integration testing for system workflows

• Edge case handling (e.g., long pauses, irrelevant responses, silent input)


6.0.3 Sample Output

The system outputs simple feedback: Relevant / Not Relevant, Confident / Not
Confident.
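
A minimal sketch of how the underlying scores could be reduced to these labels; the 0.6 and 0.5 thresholds are illustrative assumptions, not tuned values.

def simple_feedback(context_score, confidence_score):
    relevance = "Relevant" if context_score >= 0.6 else "Not Relevant"
    confidence = "Confident" if confidence_score >= 0.5 else "Not Confident"
    return f"{relevance} / {confidence}"

# Example: simple_feedback(0.82, 0.35) -> "Relevant / Not Confident"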


Chapter 7

SOFTWARE TESTING

7.1 Software Testing

Software testing is a critical phase in the software development lifecycle that ensures
the system functions as intended and meets user requirements. In this project, com-
prehensive testing was conducted to validate the accuracy, reliability, and robustness
of the Interview Preparation Model with Context, Confidence, and Sentiment Anal-
ysis.
The goal was to verify each module individually and ensure seamless integration
of all components.

7.1.1 Types of Testing

To ensure the reliability and accuracy of the Interview Preparation Model, the fol-
lowing types of testing were conducted:

• Unit Testing
Purpose: To test individual components such as audio transcription, sentiment
analysis, context detection, and confidence scoring.
Tools Used: PyTest (for Python-based modules); a minimal example appears after this list.

• Integration Testing
Purpose: To test how well the transcription module integrates with the BERT-
based NLP model and sentiment analyzer.
Ensured smooth data flow between Whisper, BERT, and the UI layer (Gra-
dio/Android).


• System Testing
Purpose: End-to-end testing of the application across different platforms (Gra-
dio web UI, Android app).
Checked functionality of uploading audio, getting feedback, and viewing results.

• User Acceptance Testing (UAT)


Conducted with a group of 10 users simulating mock interviews.
Verified that the model’s feedback was useful, easy to understand, and accurate.

• Performance Testing
Evaluated the model’s response time for audio uploads of varying lengths.
Confirmed that results are returned within acceptable limits (approximately
5–10 seconds for 1-minute audio).

• Regression Testing
After each update or bug fix, regression testing was done to ensure existing
functionalities remained intact.
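
As noted under Unit Testing above, a minimal PyTest sketch; simple_feedback stands in for the project's own feedback-mapping helper and reuses the illustrative 0.6 / 0.5 thresholds from Chapter 6.

# test_feedback.py -- run with: pytest test_feedback.py

def simple_feedback(context_score, confidence_score):
    # Assumed pure helper mirroring the feedback module's label mapping.
    relevance = "Relevant" if context_score >= 0.6 else "Not Relevant"
    confidence = "Confident" if confidence_score >= 0.5 else "Not Confident"
    return f"{relevance} / {confidence}"

def test_relevant_and_confident():
    assert simple_feedback(0.9, 0.8) == "Relevant / Confident"

def test_irrelevant_and_unconfident():
    assert simple_feedback(0.2, 0.1) == "Not Relevant / Not Confident"

def test_boundary_values():
    assert simple_feedback(0.6, 0.5) == "Relevant / Confident"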


7.1.2 Test Cases and Test Results

Test Case ID | Description                             | Input                              | Expected Result                           | Actual Result                      | Status
TC001        | Audio transcription accuracy            | 30 s clear speech                  | Correctly transcribed text                | Correct transcription              | Pass
TC002        | Sentiment analysis                      | Positive statement                 | Positive sentiment                        | Positive sentiment                 | Pass
TC003        | Sentiment analysis                      | Negative statement                 | Negative sentiment                        | Negative sentiment                 | Pass
TC004        | Context relevance detection             | Answer to “Tell me about yourself” | High context match                        | High context match                 | Pass
TC005        | Confidence scoring                      | Hesitant tone in voice             | Low confidence score                      | Low confidence score               | Pass
TC006        | Integration: Transcription → Sentiment  | Audio with mixed emotions          | Transcribe + analyze sentiment            | Successful integration             | Pass
TC007        | UI functionality (Gradio)               | Upload button click                | File upload & processing                  | Successful upload & result display | Pass
TC008        | Android app: Audio recording + feedback | Record 1-minute answer             | Accurate result in <10 sec                | Result displayed in 8 sec          | Fail
TC009        | Invalid input handling                  | Corrupted audio file               | Show error message                        | Error message displayed            | Fail
TC010        | System load (stress test)               | 10 users upload simultaneously     | System should handle concurrent requests  | All processed within time limit    | Fail

Table 7.1: Test Cases and Test Results for Interview Preparation Model

Chapter 8

RESULTS AND EVALUATION

8.1 Result Screenshot

Figure 8.1: user interface


Figure 8.2: positive analysis - answer is contextually relevant and confident


Figure 8.3: negative analysis - answer is contextually irrelevant and not confident


Figure 8.4: unconfident but relevant


Figure 8.5: sentiment analysis


Chapter 9

CONCLUSION

9.0.1 Conclusions

The Interview Preparation Model developed in this project effectively integrates con-
text recognition, confidence scoring, and sentiment analysis to evaluate user
responses to interview questions. By leveraging BERT for context analysis, Whis-
per for speech-to-text transcription, and sentiment models to gauge emotional tone,
the system provides detailed and insightful feedback to users. This holistic approach
enables users to improve not only the relevance and coherence of their answers
but also their delivery and emotional expressiveness, which are critical for suc-
cessful interviews. The model has shown promising results in guiding users toward
more confident and contextually accurate responses.

9.0.2 Future Work

• Multilingual Support: Expand the model to support multiple languages to assist non-English speakers in preparing for interviews.

• Advanced Emotion Detection: Incorporate facial expression and vocal tone analysis to better assess user sentiment and stress levels.

• Dynamic Feedback System: Implement real-time feedback during the user’s response for on-the-fly improvement.

• Personalized Question Sets: Generate customized interview questions based on the user’s resume or job profile.

• Mobile App Integration: Fully develop and deploy the Android app with an
intuitive UI for broader accessibility and usage.


• Data Collection for Fine-Tuning: Collect a larger, domain-specific dataset to fine-tune the BERT model for higher accuracy in context analysis.

9.0.3 Applications

• Interview Coaching Platforms: Use the model as a core engine in online mock interview platforms.

• Corporate HR Tools: Assist HR teams in evaluating candidate responses during remote or AI-assisted preliminary rounds.

• Career Counseling Centers: Help students and job seekers practice and improve interview skills with data-driven feedback.

• Language Learning Apps: Support spoken language learning by analyzing fluency, sentiment, and confidence in speech.

• Soft Skills Training Programs: Serve as a training aid in workshops focused on communication, public speaking, and confidence-building.


Chapter 10

REFERENCES

[1] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). This paper introduces BERT, an advanced NLP model that has been foundational for contextual analysis in language tasks.

[2] Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I. (2022). Whisper: Robust Speech Recognition via Large-Scale Weak Supervision. OpenAI. Whisper is an ASR model that transcribes audio to text, useful for converting spoken responses into text for further NLP analysis.

[3] Poria, S., Cambria, E., Bajpai, R., Hussain, A. (2023). A Review of Affective Computing: From Unified Theories to Practical Applications. IEEE Transactions on Affective Computing, 10(3), 324–343. This paper covers sentiment analysis applications, particularly affective computing, which relates to detecting and processing human emotions and is valuable for sentiment detection in interviews.

[4] Liu, B. (2022). Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies, 5(1), 1–167. A foundational text on sentiment analysis, describing techniques for identifying emotional cues in text, which are essential for analyzing sentiment in interview responses.

[5] Zhou, J., Chen, J. (2023). A Survey on Contextual Word Representations and Their Applications in Language Processing. Computational Linguistics, 44(3), 511–529. This survey explores contextual word representation models like BERT, which are crucial for analyzing relevance in candidate responses during interviews.


Appendix-A

Problem Statement and Theoretical Analysis

Problem Statement

Design and implementation of an interview preparation model with context, confidence, and sentiment analysis using machine learning and natural language processing techniques.

1. Satisfiability Analysis

The Interview Preparation Model’s goal is to evaluate responses based on context, confidence, and sentiment.
Each response must satisfy the following conditions:

• The answer should match the semantic context of the question.

• The emotional tone and confidence indicators should be measurable.

• The final feedback must be derived from measurable attributes (scores).

Thus, the problem can be reduced to determining whether there exists a set of
analysis outputs (context match %, confidence %, sentiment polarity) that satisfy
pre-defined threshold conditions for acceptable performance.

2. Computational Complexity (NP-Hard / NP-Complete / P)

Analyzing the complexity class of the problem:


Step                                           | Complexity Class | Reason
Speech-to-Text transcription                   | P                | Algorithms like Whisper run in polynomial time (with respect to audio length).
Text Analysis (Context, Sentiment, Confidence) | P                | Transformer-based models operate in polynomial time (given fixed model sizes).
Overall System Integration                     | P                | Pipeline-based processing; each module can be computed efficiently.

Thus, the overall problem lies in Class P (polynomial time problems).


It is not NP-Hard or NP-Complete, because:

• There is no combinatorial explosion or exponential search space.

• Every sub-problem has deterministic polynomial-time solutions.

Conclusion: The problem belongs to Class P and is computationally feasible.

3. Modern Algebra and Mathematical Modeling

Set Theory:
Let:

• Q = set of questions

• A = set of user answers

• C = set of context vectors


• S = set of sentiment scores

• F = set of confidence scores

The functions involved are:

f1 : A → C (Context extraction function)

f2 : A → S (Sentiment analysis function)

f3 : A → F (Confidence estimation function)

The overall system can be modeled as a composition:

f(A) = (f1(A), f2(A), f3(A))

Group Theory:
Operations like merging features (C, S, F ) can be seen as forming an Abelian
Group under feature concatenation:

• Closure: Combining features results in a feature vector.

• Associativity: The grouping of successive concatenations does not affect the resulting feature vector.

• Identity: Empty vector acts as a neutral element.

• Inverse: Removing a feature is allowed (e.g., disabling sentiment analysis).

Conclusion: The system modeling aligns with structures from Set Theory and
Group Theory under modern algebra.


Appendix-B

Details of Paper Publication:

Name of the Conference/Journal:


International Research Journal of Modernization in Engineering, Technology and
Science (IRJMETS)

Volume: 07 Issue: 03 Month/Year: March 2025

ISSN: 2582-5208
Impact Factor: 8.187

DOI: https://www.doi.org/10.56726/IRJMETS69571
Website: http://www.irjmets.com

Title of the Paper:


Interview Preparation Model with Context and Confidence Analysis

Authors:
Dr. Mahendra Jagtap, Ritesh Akhade, Niranjan Rathod, Ashutosh Wagh, Gaurav
Jadhav
(Sanghavi College of Engineering, Nashik, Maharashtra, India)


Research paper

Certificate

Appendix-C
