0% found this document useful (0 votes)
14 views74 pages

Final Year Major Project Report

The project report details the development of a 'Recipe Generator Using Food Images' that employs deep learning techniques, particularly Convolutional Neural Networks (CNNs), to identify ingredients from food images and generate corresponding recipes. It outlines the system's architecture, including modules for image processing, ingredient detection, and recipe generation, while also addressing challenges and future enhancements. The report emphasizes the integration of computer vision and natural language processing to provide personalized cooking guidance based on visual inputs.

Uploaded by

saket gupta
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
14 views74 pages

Final Year Major Project Report

The project report details the development of a 'Recipe Generator Using Food Images' that employs deep learning techniques, particularly Convolutional Neural Networks (CNNs), to identify ingredients from food images and generate corresponding recipes. It outlines the system's architecture, including modules for image processing, ingredient detection, and recipe generation, while also addressing challenges and future enhancements. The report emphasizes the integration of computer vision and natural language processing to provide personalized cooking guidance based on visual inputs.

Uploaded by

saket gupta
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 74

Recipe Generator Using Food Images

Project Report

Submitted in Partial Fulfillment of the Requirements for the Degree of

BACHELOR OF TECHNOLOGY

in

COMPUTER SCIENCE & ENGINEERING

By

Saket Gupta (2100950100070)

Utsav Chaturvedi (2100950100087)

Krish Sharma (2200950109007)

Under the guidance of

Mr. Shashi Kant Mourya

(Assistant Professor)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

MGM’s College of Engineering & Technology, Noida

May 2025
TABLE OF CONTENTS

DECLARATION………………………………………………………………........v

CERTIFICATE……………………………………………………………………..vi

AKNOWLEDGEMENT……………………………………………………….......vii

ABSTRACT……………………………………………………………………….viii

LIST OF FIGURES………………………………………………………………...ix

LIST OF TABLES………………………………………………………………….x

LIST OF ABBREVIATIONS………………………………………………….......xi

CHAPTER 1 (INTRODUCTION, BACKGROUND OF THE PROBLEM,

STATEMENT OF PROBLEM etc.)

1.1 Literature review…………………………………………………………....1

1.1.1 Recipe Generation and Natural Language Processing………………..1

1.1.2 What are the key studies and findings in the field……………………2

1.1.3 Key Highlights of the current state of Recipe Generator Using Food

Images………………………………………………………………………2

1.1.4 Discuss the limitations or gaps you identified in existing literature….3

1.2 Problem definition………………………………………………………….4

1.3 Brief introduction of the project....................................................................5

1.3.1 Plan of Action………………………………………………………...5

1.3.2 Data Collection and Pre-processing………………………………….6

1.3.3 Model Development and Ingredient Recognition……………………6

1.3.4 Recipe Generation……………………………………………………6

1.3.5 Personalization Features……………………………………………...7

1.3.6 Testing and Validation……………………………………………….8

1.3.7 Iterative Improvement………………………………………………..8

ii
1.3.8 Deployment and Future Enhancements………………………………8

1.4 Proposed modules…………………………………………………………..8

1.4.1 Image Processing and Pre-processing Module……………………….9

1.4.2 Food Image Recognition and Ingredient Detection Module…………9

1.4.3 Recipe Generation Module…………………………………………...9

1.4.4 Personalization and Adaptation Module……………………………...9

1.4.5 Nutritional Analysis Module………………………………………...10

1.4.6 User Interface and Interaction Module………………………………10

1.4.7 Testing and Evaluation Module……………………………………...10

1.4.8 Multilingual Support Module………………………………………..10

1.5 Hardware & Software requirements………………….................................11

1.5.1 Hardware Requirements…………………………………………….11

1.5.2 Software Requirements……………………………………………..11

CHAPTER 2 (SYSTEMS ANALYSIS AND SPECIFICATION)

2.1 A functional model…………………………………….………………….14

2.2 A data model…………………………….…..............................................16

2.2.1 Data-Flow Diagram…………………………...................................17

2.2.2 Entity-Relationship Diagram……………………………………….18

2.2.3 Class Diagram……………………………………………………....19

2.3 A process-flow model…………………………………………………….19

2.3.1 Activity Diagram…………………………………………………...20

2.3.2 Sequence Diagram………………………………………………….21

2.4 System Design……………………..……………………………………..22

2.4.1 Design Options……..……………………...……………………….22

2.4.2 Technical Feasibility……………………………………………….23

2.4.3 Operational Viability……………………….….…………………...23

iii
2.4.4 Economic Viability……………………………...............................24

2.4.5 Optimal Design…………………………………………………….24

CHAPTER 3 (MODULE IMPLEMENTATION & SYSTEM INTEGRATION)

3.1 Module Implementation……………….....................................................25

3.2 System Integration…………………………………….............................26

3.3 Integration Testing……………………………………………………….27

3.4 Tools and Technologies Used…………………………………………....27

3.5 Challenges and Solutions………………………………………………...28

3.6 Conclusion……………………………………………………………….28

CHAPTER 4 (TESTING AND EVALUATION)

4.1 Testing………………………………………………...............................29

4.2 Types of Testing Performed……………………………………………..31

4.2.1 Unit Testing………………………………………………………..31

4.2.2 Integration Testing………………………………………………...31

4.2.3 Functional Testing………………………………………………...32

4.2.4 Usability Testing…………………………………………………..32

4.2.5 Performance Testing……………………………………………....32

4.2.6 Security Testing…………………………………………………...33

4.3 Evaluation……………………………………………………………….33

4.4 Challenges Encountered………………………………………………...35

4.5 Conclusion……………………………………………………………....35

CHAPTER 5 (TASK ANALYSIS AND SCHEDULE OF ACTIVITIES)

5.1 Task Decomposition………………………………………….….............36

5.2 Project schedule ………………………………………………………....37

5.3 Task specification…………….………………………………………….38

iv
CHAPTER 6 (PROJECT MANAGEMENT)

6.1 Major Risks and Contingency Plans…………………………………….40

6.2 Risk Identification……………………………………………………….40


6.3 Principle Learning Outcomes…………………………………………....41
6.4 Technical Skills Acquired……………………………………………….41

6.5 Project Management and soft skills……………………………………..42

6.6 Innovation and critical thinking…………………………………………42

6.7 Ethical Consideration and User-Centric Design…………………..….....42

6.8 Overall Project Reflection………………...……………………………..42

6.9 Future Scope……………………………………………………………..43

6.10 Conclusion……………………………………………………………...44

6.11 Result……………………………………………………………...........45

PLAGIARISM REPORT....................................................................................49

RESEARCH PAPER...........................................................................................51

CERTIFICATES…………………………………………………………….....56

APPENDIX A.......................................................................................................58

APPENDIX B.......................................................................................................59

REFERENCES………………………………………………………………....60

v
DECLARATION

I hereby declare that this submission is my own work and that, to the best of my
knowledge and belief, it contains no material previously published or written by
another person nor material which to a substantial extent has been accepted for the
award of any other degree or diploma of the university or other institute of higher
learning, except where due acknowledgment has been made in the text.

Name of Student: Saket Gupta

Roll No.: 2100950100070

Signature:

Name of Student: Krish Sharma

Roll no.:2200950109007

Signature:

Name of Student: Utsav Chaturvedi

Roll No.: 2100950100087

Signature:

vi
CERTIFICATE

This is to certify that Project Report entitled “Recipe Generator Using Food Images”
which is submitted by Saket Gupta, Utsav Chaturvedi, Krish Sharma in partial
fulfillment of the requirement for the award of degree B. Tech. in Department of
Computer Science and Engineering of MGM’s College of Engineering and
Technology which is affiliated by AKTU Lucknow, is a record of the candidate own
work carried out by him/her under my supervision. The matter embodied in this report
is original and has not been submitted for the award of any other degree.

Date: Supervisor Signature:

Name of Supervisor: Mr. Shashi Kant Mourya

Designation: Assistant Professor

vii
ACKNOWLEDGEMENT

It gives us a great sense of pleasure to present the report of the B. Tech Project
undertaken during B. Tech. Final Year. We owe special debt of gratitude to Mr. Shashi
Kant Mourya Department of Computer Science & Engineering, MGM’s College of
Engineering and Technology, Noida for his/her constant support and guidance
throughout the course of our work. His/her sincerity, thoroughness and perseverance
have been a constant source of inspiration for us. It is only his/her cognizant efforts
that our endeavors have seen light of the day.

We also take the opportunity to acknowledge the contribution of Mrs. Karamjeet Kaur
Head, Department of Computer Science & Engineering, MGM’s College of
Engineering and Technology, Noida for his/her full support and assistance during the
development of the project.

We also do not like to miss the opportunity to acknowledge the contribution of all
faculty members of the department for their kind assistance and cooperation during
the development of our project. Last but not the least, we acknowledge our friends for
their contribution in the completion of the project.

Name of Student: Saket Gupta

Roll No.: 2100950100070

Signature:

Name of Student: Krish Sharma

Roll no.:2200950109007

Signature:

Name of Student: Utsav Chaturvedi

Roll No.: 2100950100087

Signature:

viii
ABSTRACT

The integration of deep learning techniques in culinary technology has opened new
avenues for automating recipe generation from food images. This project introduces
a sophisticated system that leverages Convolutional Neural Networks (CNNs) and
advanced image processing algorithms to identify ingredients from uploaded food
images and generate corresponding recipes. The system begins with image
preprocessing to enhance the quality and extract significant features, followed by
ingredient classification using CNNs. These identified ingredients are then used by
the recipe generation algorithm to create detailed recipes, providing users with
comprehensive cooking instructions based on the visual input.

By utilizing a large dataset of food images and recipes, the system learns to generalize
across various cuisines, making it versatile and effective. The project highlights the
potential of combining computer vision and natural language processing to align
visual content with textual recipes, addressing challenges in ingredient recognition
and ensuring coherent recipe generation

This innovative approach not only simplifies the culinary process for users but also
enhances their cooking experience by offering personalized and accurate recipe
suggestions based on the food items at hand. The use of multimodal learning
techniques further enriches the system's capability to understand and process visual
and textual data concurrently, paving the way for more advanced applications in the
future.

ix
LIST OF FIGURES

Sr. No. Figure Number Figure Name Page Number

1 2.1 Functional Model 14


2 2.2 DFD Level 0 17
3 2.3 DFD Level 1 17
4 2.4 ER-Diagram 18
5 2.5 Class Diagram 19
6 2.6 Activity Diagram 20
7 2.7 Sequence Diagram 21
8 6.1 Landing Page 45
9 6.2 Website Description 45
10 6.3 Food Image Dataset 46
11 6.4 Upload Food Image 46
12 6.5 Predicted Dish 47
13 6.6 Recipe Book 48
14 6.7 DrillBit Plagiarism 49
Report
15 6.8 Research Paper 51

x
LIST OF TABLES

Sr. No. Table Number Table Name Page Number

1 Table 5.1 Project Schedule 37

2 Table 5.2 Task Specification 38

xi
LIST OF ABBREVIATIONS

Abbreviation Full Form

AI Artificial Intelligence

ML Machine Learning

DL Deep Learning

CNN Convolutional Neural Network

UI User Interface

UX User Experience

API Application Programming Interface

DB Database

CRUD Create, Read, Update, Delete

HTTP HyperText Transfer Protocol

JSON JavaScript Object Notation

HTML HyperText Markup Language

CSS Cascading Style Sheets

JS JavaScript

JPG/JPEG Image File Formats (Joint Photographic Experts Group / Graphics)

RAM Random Access Memory

CPU Central Processing Unit

xii
Abbreviation Full Form

GPU Graphics Processing Unit

FLASK A Python-based Micro Web Framework

IDE Integrated Development Environment

ROI Region of Interest

OCR Optical Character Recognition (if applicable)

ResNet Residual Network (a type of CNN architecture)

VGG Visual Geometry Group (another CNN architecture)

CLI Command Line Interface

MIT Massachusetts Institute of Technology (referencing Recipe Dataset)

xiii
CHAPTER 1

INTRODUCTION
The "Recipe Generator Using Food Images" is an innovative project designed to utilize deep
learning techniques to transform how individuals engage with food imagery. In the current
digital age, food photography is a popular source of inspiration, with millions sharing and
viewing food-related content on social media platforms. However, recreating these dishes
often poses a challenge for many due to insufficient culinary knowledge or lack of detailed
recipes. This project seeks to fill that gap by developing a system that can generate complete
cooking recipes, including creative dish names, ingredient lists, and step-by-step instructions,
simply from analyzing food images.

The core concept of the project revolves around employing advanced machine learning
models—specifically computer vision and natural language processing—to accurately
identify dish components and convert visual information into text. By doing so, the Recipe
Generator enables users to transform food imagery into practical culinary guidance, helping
them try out new recipes or recreate meals from photos. This technology holds great promise
in fields such as gastronomy, recipe personalization, and food blogging, providing a valuable
tool for both beginner cooks and culinary enthusiasts alike.

By incorporating methods like image classification, object detection, and recipe generation
models, this project highlights the intersection between artificial intelligence and food
technology, offering a useful solution for anyone looking to enhance their cooking experience
based on visual inspiration.

1.1 Literature Review:

The combination of food recognition, recipe generation, and deep learning has seen
significant progress in recent years, fueled by advancements in computer vision and natural
language processing (NLP). Generating recipes from food images is a complex task that
involves both visual recognition and language generation, with several approaches, datasets,
and models contributing to the field.

1.1.1 Recipe Generation and Natural Language Processing:

Generating recipes from images involves not just recognizing food items but also converting
that data into structured text. Early methods used rule-based systems that mapped identified
ingredients to pre-set cooking instructions (Salvador et al., 2017). However, these systems
lacked flexibility and often produced repetitive results.

Neural networks, particularly Recurrent Neural Networks (RNNs) and Long Short-Term
Memory (LSTM) networks, have since been used to generate more coherent and diverse
recipe text. Salvador et al.'s Pic2Recipe system combined CNNs for ingredient recognition
with RNNs for recipe generation, showcasing the potential for modern recipe generation
systems.

More recent text generation models like GPT (Generative Pre-trained Transformer) and
BERT (Bidirectional Encoder Representations from Transformers) have demonstrated the
ability to generate more detailed and contextually appropriate recipes. Zhu et al. (2020)

1
explored multimodal learning, integrating both image and text data to improve the quality and
variety of generated recipes.

1.1.2 What are the key studies and findings in the field:

 Food-101 Dataset (Bossard et al., 2014): Introduced a benchmark food image


dataset, facilitating CNN-based food recognition.
 Pic2Recipe (Salvador et al., 2017): Pioneered combining computer vision and NLP
to generate recipes from food images but highlighted challenges in ingredient
recognition and recipe coherence.
 Deep Food (Kawano & Yanai, 2015): Demonstrated real-time food recognition
using CNNs, focusing on mobile applications in varying environments.
 Recipe1M Dataset (Marin et al., 2019): A large dataset that significantly improved
recipe generation by providing over a million images and corresponding recipes,
enhancing model generalization.
 Multimodal Recipe Generation (Zhu et al., 2020): Showcased the benefits of
combining image and text data, improving recipe diversity and accuracy.

1.1.3 Key Highlights of the current state of Recipe Generator Using Food Images:

Recent advancements in deep learning, computer vision, and natural language processing
have greatly influenced the development of recipe generators using food images. Significant
progress has been made in the following areas:

 Improved Food Image Recognition: CNNs have become highly accurate in classifying
food items and ingredients from images. Models such as ResNet and Inception, along
with large datasets like Food-101 and Recipe1M, have played crucial roles in
improving recognition across a wide range of cuisines and presentations.

 Multimodal Learning Integration: Modern recipe generation systems use both visual
and textual data to better understand the relationship between a dish's appearance and
its ingredients. By combining computer vision with NLP techniques like transformers,
these models can generate more coherent and natural recipe instructions.

 Large Datasets: Publicly available datasets, such as Recipe1M, have contributed


significantly to the progress in this field. These datasets provide millions of images
paired with recipes, enabling models to learn better representations of food and
corresponding recipes.

 Challenges in Ingredient Recognition and Recipe Accuracy: Despite advancements,


difficulties remain in identifying complex dishes with mixed ingredients or
recognizing small components. Similarly, generating precise cooking instructions for

2
intricate recipes is still a challenge, especially when regional variations in cooking
methods come into play.

 Applications and Personalization: Recipe generators are being used in areas like food
blogging and smart kitchen assistants. However, the ability to personalize recipes
based on dietary needs, preferences, or ingredient availability is still in development,
with ongoing research addressing these limitations.

1.1.4 Discuss the limitations or gaps you identified in existing literature:

The current literature on Recipe Generators Using Food Images highlights significant
advancements in the fields of computer vision and natural language processing, but also
reveals several limitations and gaps that hinder the broader applicability and accuracy of
these systems.

 Ingredient Recognition Limitations: One of the major gaps in existing systems is


the difficulty in accurately recognizing all ingredients from a single image. While
models like those used in Pic2Recipe (Salvador et al., 2017) have made strides in
identifying primary ingredients, they often struggle with small or visually ambiguous
components such as spices, sauces, and garnishes. Complex dishes with blended or
obscured ingredients are particularly challenging, and the models may fail to provide
a complete or correct recipe due to these recognition issues.

 Handling Mixed Dishes and Complex Foods: Current literature shows that food
recognition systems perform well on simple, single-item dishes, but encounter
difficulties with mixed or layered dishes. Complex meals such as soups, sandwiches,
or salads, which contain multiple ingredients presented together, can confuse models
that are primarily trained on well-separated, clearly visible food items. For example,
the Recipe M dataset (Marin et al., 2019), despite being a large dataset, does not
adequately address this complexity, limiting the effectiveness of models trained on
such data.

 Dataset Limitations and Biases: The datasets used in food recognition and recipe
generation models, such as Food-101 (Bossard et al., 2014) and Recipe1M (Marin et
al., 2019), are often skewed toward Western cuisines, leading to biases in the models’
ability to recognize dishes from other cultures. This gap results in underrepresentation
of global cuisines, which reduces the system's utility for a more diverse, global

3
audience. Additionally, these datasets may not contain enough variety in food
presentation, limiting the generalization of models to real-world images with varying
quality, lighting, or angles.

 Inaccuracy in Cooking Instructions: While existing systems are relatively


successful at generating ingredient lists, they often struggle to produce coherent and
accurate cooking instructions. Studies such as those on multimodal learning (Zhu et
al., 2020) highlight that while deep learning models can generate recipes, the steps are
often overly simplified or lack the necessary detail for accurate execution. For
example, models may fail to account for specific cooking times, temperatures, or
nuanced techniques, resulting in generic instructions that do not reflect the complexity
of the dish.

 Lack of Personalization and Adaptability: A significant gap in current literature is


the lack of personalization in recipe generation systems. Most models generate
standard recipes without considering user preferences, dietary restrictions, or
ingredient availability. This limits the practical application of these systems, as users
may need to manually adjust the recipes to suit their needs. Furthermore, models are
not yet able to adapt recipes dynamically based on the specific ingredients a user has
at hand, which would enhance their utility for real-world cooking.

 Limited Focus on Nutritional Information: Another gap in the existing literature is


the absence of nutritional analysis in generated recipes. While the focus has largely
been on generating accurate and coherent recipes from food images, little attention
has been given to the nutritional value of the generated recipes. Incorporating
nutritional information based on the identified ingredients and portion sizes would
provide added value, especially for health-conscious users or those with specific
dietary goals.

1.2 Problem Definition:

In the era of social media, food imagery plays a significant role in influencing culinary
choices. However, many individuals struggle to recreate dishes they find appealing due to
a lack of cooking knowledge or experience. The challenge lies in developing an
intelligent system that can analyse food images and generate comprehensive cooking
recipes.

4
Key Objectives:

 To design a deep learning model that accurately identifies ingredients and dish types
from food images.

 To generate detailed recipes, including ingredient lists, cooking instructions, and


catchy titles, based on the identified components.

 To create a user-friendly interface that allows users to upload images and receive
recipes, bridging the gap between inspiration and execution in cooking.

1.3 Brief introduction of the projects:

In today’s digital world food image plays a crucial role in shaping preferences and
choices. With platforms like Instagram and Pinterest upbringing mouth watering food
photos, individuals are increasingly inspired to explore new dishes. However, many
aspiring cooks often find themselves at a loss when it comes to recreating these
visually appealing dishes, primarily due to a lack of cooking knowledge, experience,
or recipe access.

To address this challenge, this project proposes the development of a Recipe


Generator that utilizes advanced deep learning and computer vision techniques to
analyze food images and generate comprehensive cooking recipes. The system will be
designed to recognize various ingredients, dish types, and cooking styles from images,
creating a seamless user experience that transforms visual inspiration into practical
execution.

The Recipe Generator will provide users with a detailed recipe that includes
ingredient lists -by-step cooking instructions, and catchy titles, tailored to the
identified components of the dish. By bridging the gap between inspiration and action,
this project aims to empower individuals .with the tools and confidence, they need to
recreate their favorite dishes in their own kitchens, enhancing their culinary skills and
encouraging creativity in cooking.

Ultimately, this innovative solution seeks to make cooking more accessible,


enjoyable, and intuitive for a wide range of users, from novices to seasoned home
cooks.

5
1.3.1 Plan of Action:

 Clearly define the goals of the project, focusing on improving the accuracy of
ingredient recognition and the quality of generated recipes.

 Identify the target user base (e.g., individuals seeking easy recipe generation, people
with dietary restrictions, etc.).

 Set specific objectives such as personalization, regional diversity in cuisine, and


improved cooking instructions.

1.3.2 Data Collection and Preprocessing:

 Datasets:

• Use existing datasets like Recipe1M and Food-101 for initial model training.

• Supplement with additional datasets, especially those with global and regional
cuisine diversity, to reduce biases.

• Preprocess datasets to ensure high-quality, labelled images and corresponding


recipes.

 Augmentation:

• Perform data augmentation techniques (e.g., rotation, cropping, brightness


adjustments) to simulate real-world image variability.

1.3.3 Model Development and Ingredient Recognition:

 Computer Vision Model (CNN-based):

• Use advanced Convolutional Neural Networks (CNNs) like ResNet, Inception,


or EfficientNet to handle food image recognition.

• Fine-tune pre-trained models on food-specific datasets to improve ingredient


classification.

• Address ingredient overlap and complex presentations by experimenting with


multi-label classification techniques to identify multiple ingredients in a single
image.

6
1.3.4 Recipe Generation:

 Natural Language Processing (NLP) Model:

• Use Recurrent Neural Networks (RNNs) or Transformer models (e.g., GPT,


BERT) for recipe generation. These models will take recognized ingredients
as input and generate the corresponding recipe steps.

• Fine-tune language models on food and cooking-related corpora to ensure


accurate and natural recipe instructions.

 Multimodal Learning:

• Explore multimodal learning techniques that combine visual data (image) and
textual data (recipe) for more accurate recipe generation. This could involve
training a joint embedding space for both images and text

1.3.5 Personalization Features:

 User Preferences and Dietary Restrictions:

• Build a recommendation engine that allows users to input dietary restrictions


(e.g., vegan, gluten-free) and preferences. The system will adapt generated
recipes to meet these criteria.

• Incorporate user-specific ingredient availability and offer recipe suggestions


based on what users have at home.

 Recipe Adjustment:

• Integrate the ability to scale recipes or substitute ingredients dynamically


based on user preferences or ingredient shortages.

1.3.6 Testing and Validation:

 Model Evaluation:

• Evaluate the performance of the food recognition model using accuracy,


precision, recall, and F1-score metrics on test images.

7
• Measure the coherence and usability of generated recipes through human
evaluation, asking participants to review or attempt cooking the generated
recipes.

 User Feedback:

• Deploy a beta version of the system to a small group of users to gather


feedback on the usability, recipe quality, and personalization features.

• Conduct usability testing to ensure the interface is intuitive and easy to


navigate.

1.3.7 Iterative Improvement:

 Based on testing and feedback, iteratively improve the ingredient recognition


accuracy and the naturalness of recipe generation.

 Address any identified issues in the personalization features or interface, refining


based on user needs.

1.3.8 Deployment and Future Enhancements:

 Deployment:

• Develop a web or mobile app interface for user interaction, allowing users to
upload images and receive recipes.

• Ensure scalability and smooth deployment by using cloud-based solutions if


necessary.

 Nutritional Analysis (Future Enhancement):

• Add a nutritional analysis feature that provides information about calories,


macros, and other dietary information based on identified ingredients.

 Multilingual Support (Future Enhancement):

• Expand the system to generate recipes in multiple languages to increase


accessibility for global users.

8
1.4 Proposed Module:

The system can be divided into several key modules, each focusing on a specific aspect of the recipe
generation process, from image recognition to the final output of a personalized, accurate recipe.
Below are the proposed modules.

1.4.1 Image Processing and Preprocessing Module:

 Objective: To prepare the input food images for further analysis by applying
preprocessing techniques.
 Components:
• Image Resizing: Standardize image sizes for consistent input to the deep
learning models.
• Image Enhancement: Apply techniques such as brightness adjustment, contrast
enhancement, and noise reduction.
• Data Augmentation: Generate more training data by applying transformations
like rotations, flips, and colour adjustments to simulate real-world variability.

1.4.2 Food Image Recognition and Ingredient Detection Module:

 Objective: To detect and identify ingredients in the uploaded food image using deep
learning models.
 Components:
• Convolutional Neural Network (CNN): Use models like ResNet or
EfficientNet for food image recognition.
• Ingredient Classification: Perform multi-label classification to identify
multiple ingredients in complex dishes.
• Ingredient Confidence Scoring: Assign a confidence score to each detected
ingredient to ensure accuracy.
• Handling Complex Dishes: Implement techniques to handle visually mixed or
blended dishes by refining recognition algorithms.

1.4.3 Recipe Generation Module:

 Objective: To generate a step-by-step recipe based on the recognized ingredients


using natural language processing (NLP) models.
 Components:
• Recurrent Neural Networks (RNN) or Transformer Models: Utilize RNNs
(such as LSTM) or Transformer models (like GPT-3) to generate recipes in
natural language.
• Recipe Structure Generator: Ensure that generated recipes include clear
sections such as ingredient lists, cooking steps, and preparation time.
• Contextual Recipe Generation: Use multimodal learning to combine the image
data with ingredient lists for a more context-aware recipe generation process.

1.4.4 Personalization and Adaptation Module:

 Objective: To customize recipes based on user preferences, dietary restrictions, or


available ingredients.

9
 Components:
• User Input Preferences: Allow users to input preferences (e.g., vegetarian,
gluten-free, low-carb) and dietary restrictions.
• Ingredient Substitution Engine: Provide recommendations for ingredient
substitutions if certain items are unavailable or restricted.
• Recipe Scaling: Offer options to scale the recipe based on the number of
servings required.
• Real-Time Ingredient Matching: Match the recognized ingredients with the
user’s available inventory or preferences.

1.4.5 Nutritional Analysis Module:

 Objective: To calculate the nutritional value of the generated recipe based on the
identified ingredients.
 Components:
• Ingredient Nutrient Database: Connect the recognized ingredients to a
nutritional database to retrieve information about calories, macronutrients
(carbs, fats, proteins), and micronutrients.
• Nutritional Summary Generator: Generate a detailed nutritional breakdown for
the entire recipe, helping users understand its health value.
• Dietary Goal Alignment: Suggest modifications to the recipe to meet specific
health or fitness goals (e.g., low calorie, high protein).

1.4.6 User Interface and Interaction Module:

 Objective: To provide an intuitive interface for users to upload images, view recipes,
and interact with the system.
 Components:
• Image Upload Interface: Allow users to easily upload food images from their
device.
• Recipe Display Interface: Display generated recipes with clear instructions,
including a breakdown of ingredients and cooking steps.
• Preference Setting: Enable users to set dietary preferences or ingredient
availability for more personalized recipe generation.
• Feedback and Rating System: Allow users to rate the accuracy of the
generated recipe or provide feedback for system improvement.

1.4.7 Testing and Evaluation Module:

 Objective: To ensure the accuracy and performance of the food recognition and recipe
generation models.
 Components:
• Model Accuracy Testing: Evaluate the performance of the CNN for food
recognition using metrics like accuracy, precision, and recall.
• User Evaluation: Gather feedback from users about the usefulness and
accuracy of the generated recipes.
• Continuous Model Improvement: Regularly update and fine-tune models
based on user feedback and newly available data.

10
1.4.8 Multilingual Support Module:

 Objective: To allow the system to generate recipes in multiple languages.


 Components:
• Language Translation Engine: Translate recipes into different languages using
NLP translation models.
• Culturally Specific Recipe Adjustments: Adjust recipes based on regional
ingredients and cooking methods for users from different cultural
backgrounds.

1.4.9 Voice-Assisted Cooking Module:

 Objective: To provide a hands-free, voice-guided cooking experience for users.


 Components:
• Voice Recognition: Implement voice control for navigating through the recipe
steps.
• Step-by-Step Voice Instructions: Provide real-time audio instructions for each
step of the recipe.

1.5 Hardware and Software Requirements:

1.5.1 Hardware Requirements:

 Development Machine / Server:


• Processor: Intel Core i7/i9 or AMD Ryzen 7/9 or higher (multi-core support
for parallel processing).
• GPU: NVIDIA GPU (e.g., RTX 2080, RTX 3080, Tesla T4) with CUDA
support for deep learning tasks.
1. Note: A powerful GPU is essential for training the deep learning
models, especially for tasks like image recognition (CNN) and recipe
generation (NLP models).
• RAM: 16 GB minimum (32 GB recommended).
• Storage:
1. 1 TB SSD (solid-state drive) for fast read/write operations during
training and data handling.
2. Additional HDD for storing datasets if necessary (1–2 TB for large
food image and recipe datasets).
• Network: High-speed internet for downloading pre-trained models, datasets,
and other resources (500 Mbps or higher recommended).
 Deployment Machine (Cloud or Local Server):
• Processor: Intel Xeon or AMD EPYC (multi-core processor).
• GPU: Cloud-based GPUs such as NVIDIA A100 (if using cloud services) or
dedicated NVIDIA GPUs for on-premise solutions.
• RAM: 16 GB minimum (32 GB or more recommended for serving multiple
requests simultaneously).
• Storage: SSD storage of at least 500 GB (for pre-trained models, user data,
logs, etc.).
• Cloud Infrastructure (Optional): Services such as AWS, Google Cloud
Platform (GCP), or Microsoft Azure can be used for scalable cloud hosting.

11
1.5.2 Software Requirements:

 Operating System:
• Development Environment:
1. Linux (Ubuntu 20.04+ recommended) for compatibility with deep
learning frameworks.
2. Windows 10/11 (with WSL2 for Linux compatibility) or macOS
(M1/M2 chips).
• Deployment Environment:
1. Linux-based OS for server deployment (Ubuntu, CentOS, or any other
stable Linux distribution).
 Deep Learning Frameworks:
• TensorFlow 2.x or PyTorch: For building and training CNNs and NLP
models.
1. TensorFlow is widely used for large-scale projects, while PyTorch is
often preferred for research and model experimentation.
• Keras (optional): High-level API for TensorFlow for easy model prototyping.
 Image Processing Libraries:
• OpenCV: For image preprocessing, augmentation, and manipulation.
• Pillow: Python Imaging Library (PIL) for basic image handling.
• scikit-image: For additional image processing tasks (e.g., filtering, image
segmentation).
 Natural Language Processing Libraries:
• Hugging Face Transformers: For recipe generation using models like GPT,
BERT, or custom NLP models.
• spaCy or NLTK: For additional text processing and natural language tasks.
 Data Handling and Storage:
• Pandas: For dataset manipulation (e.g., ingredient lists, recipes, and metadata).
• NumPy: For numerical operations and handling large data arrays.
• SQL / NoSQL Database: To store recipes, user preferences, and generated
data.
1. PostgreSQL (SQL) or MongoDB (NoSQL) for storing user inputs,
preferences, and generated recipe data.
 Web Framework (For Deployment):
• Flask or Django: For building the web-based application where users upload
images and receive recipes.
• Fast API: For building fast, asynchronous APIs for communication between
the front end and the machine learning model.
 Cloud Services (Optional):
• AWS S3 or GCP Cloud Storage: For storing images, datasets, and user-
generated content.
• AWS SageMaker or Google AI Platform: For training and deploying machine
learning models in the cloud.
 Frontend Technologies:
• HTML/CSS/JavaScript: For building the front-end user interface.
• React.js or Vue.js: For creating dynamic and responsive web applications.
• Bootstrap or Material-UI: For UI design to ensure the interface is user-friendly
and mobile-responsive.
 Version Control:
• Git: For managing code versions and collaboration.

12
• GitHub/GitLab/Bitbucket: For hosting code repositories and managing issues.
 Package Management:

• Anaconda or pip: For managing Python packages and dependencies.

 Docker (Optional): For containerizing the application and ensuring consistent


environments across different machines.
 Other Tools:

• Jupyter Notebooks: For model experimentation, data analysis, and


prototyping.
• Tensor Board: For monitoring training progress, model performance, and
visualizing data.
• Hyperparameter Tuning Libraries: Optuna or Ray Tune for tuning deep
learning model parameters efficiently.

13
CHAPTER 2

SYSTEMS ANALYSIS AND SPECIFICATION

2.1 Functional Model:

A functional model represents the logical flow and interaction of different components
within the system, focusing on what the system does rather than how it is implemented. In the
Recipe Generator Using Food Images project, the functional model outlines the key
operations and data transformations required to convert a user-uploaded food image into a
corresponding recipe.

Figure 2.1 : Functional Model

14
Step 1: User Interface

 Description: The User Interface (UI) is the first point of interaction for users. It
allows users to upload food images for analysis.

 Functionality:

• The UI should be intuitive and user-friendly.

• Users can easily navigate the interface to upload images.

• Options for setting user preferences (e.g., dietary restrictions) should be


available.

Step 2: Image Preprocessing

 Description: This step prepares the uploaded images for analysis by applying
preprocessing techniques.

 Functionality:

• Image Resizing: Standardizes image sizes to ensure consistency for the deep
learning model.

• Image Enhancement: Improves image quality through brightness


adjustments, contrast enhancement, and noise reduction.

• Data Augmentation: Generates variations of training data (e.g., rotations,


flips, color adjustments) to simulate real-world variability and enhance model
robustness.

Step 3: Image Recognition Model

 Description: This component employs deep learning techniques to analyze


preprocessed images and identify the ingredients.

 Functionality:

• Utilizes a Convolutional Neural Network (CNN) to process the preprocessed


image and extract features.

15
• Classifies ingredients through multi-label classification to identify multiple
ingredients in complex dishes.

• Generates confidence scores for each identified ingredient to ensure


recognition accuracy.

Step 4: Recipe Generation Algorithm

 Description: This step generates a coherent recipe based on recognized ingredients.

 Functionality:

• Receives the list of recognized ingredients as input.

• Utilizes Natural Language Processing (NLP) techniques, such as Recurrent


Neural Networks (RNNs) or Transformers, to create structured recipe
instructions, including ingredient lists, cooking steps, and preparation times.

• May incorporate multimodal learning to combine visual and textual data for
improved recipe generation.

Step 5: User Output

 Description: This component presents the generated recipe to the user.

 Functionality:

• Displays the recipe in a clear and organized format, including sections for
ingredients and cooking instructions.

• Provides options for users to adjust the recipe based on preferences, such as
scaling the recipe or substituting ingredients.

2.2 Data Model:

The data modeling or data structuring represents the nature of data, and the business logic
to control the data. It also organizes the database. The structures of data are explicitly
determined by the data models. Data model helps to communicate between business
people, who require the computer system, and the technical people, who can fault their
requirements.

16
2.2.1 Data Flow Diagram:

Capture Image Fetch and


Encode Image
User Upload Image System
Recipe Generate Recipe
Provide Recipe Generator Predict Image

Figure 2.2: DFD Level 0

The Data Flow Diagram (DFD), Figure 2.2, for "Recipe Generator" illustrates the
interaction between the User, Recipe Generator, and the System. The user captures and
uploads an image of a dish to Recipe Generator, which fetches and encodes the image
before sending it to the System. The System processes the image to predict the dish or
ingredients and generates a corresponding recipe. This recipe is then sent back to Recipe
Generator, which provides it to the user. The process showcases a streamlined workflow
where the user's image input is transformed into a detailed recipe output through
collaborative processing between Recipe Generator and the System.

Figure 2.3: DFD level 1

17
The Level 1 Data Flow Diagram (DFD), Figure 2.3, for the "Recipe Generator" system
outlines the key processes and data exchanges between the User, Recipe Generator, the
System, and the Recipe Database. The user starts by uploading an image to Recipe
Generator. Recipe Generator sends this image to the System, which processes it and
encodes the image using the DenseNet201 model. The encoded image is then sent back
to Recipe Generator, where it checks for similarity with the recipes stored in the Recipe
Database. Once a matching recipe is found, Recipe Generator retrieves the recipe from
the database and displays it to the user. This diagram effectively captures the detailed
steps and interactions required to convert an uploaded image into a relevant recipe.

2.2.2 Entity Relationship Model:

E-R diagram displays the relationship of entity sets stored in a database. ER diagrams
help to explain the logical structure of databases.ER model help us to analyze data
requirements systematically to produce a well-designed database.

Figure 2.4: E-R Diagram

18
2.2.3 Class diagram:

 Class diagram is a static diagram. It represents the static view of an application.


Class diagram is not only used for visualizing, describing, and documenting
different aspects of a system but also for constructing executable code of the
software application.

 Class diagram describes the attributes and operations of a class and also the
constraints imposed on the system. The class diagrams are widely used in the
modeling of object-oriented systems because they are the only UML diagrams,
which can be mapped directly with object- oriented languages.

 Class diagram shows a collection of classes, interfaces, associations,


collaborations, and constraints. It is also known as a structural diagram.

Figure 2.5: Class Diagram

2.3 A Process Flow Model: This model describes the flow of controls in the system. This is
preparation to see how the system will work when executed.

19
2.3.1 Activity Diagram:

Figure 2.6: Activity Diagram

The activity diagram outlines the process flow for a recipe generator system using
images. Here's a step-by-step explanation of each stage:

1. Registration: The user first registers with the system to create an account.

2. Login: After registration, the user attempts to log in. If the login is unsuccessful,
the process loops back to prompt the user to log in again. If successful, the process
continues to the next step.

3. Pre-Processing: Once logged in, the system performs pre-processing on the


uploaded image. This step may involve enhancing image quality, resizing, or other
techniques to prepare the image for analysis.

20
4. Segmentation: The pre-processed image is then segmented to isolate relevant
regions, such as the specific food items present in the image.

5. Extraction of Food Images: Following segmentation, the individual food items or


components are extracted from the image for further analysis.

6. India-Based Feature Extraction: The system extracts feature from the food images
that are relevant for identifying Indian cuisine or specific regional dishes.

7. Classification using CNN (Convolutional Neural Network): The extracted features


are fed into a CNN, which classifies the type of food based on its learned patterns and
characteristics.

8. Result: Finally, the system generates a result, which could be the identified food
type along with possible recipe suggestions based on the classified food.

2.3.2 Sequence Diagram:

Figure 2.7: Sequence Diagram

This sequence diagram illustrates the process flow for classifying a food image within
a recipe generation system. Below is a detailed breakdown of the key steps involved.

1. Login/Registration: The user initiates the interaction by logging into the system or reg
istering as a new user. This step ensures that the user has access to the system's featur
es and services.

21
2. Successful Login/Registration: Once the user provides the necessary credentials, the s
ystem authenticates the information. Upon successful verification, the user is granted
access to the system, confirming their login or registration.
3. Pre-Processing: The system begins by pre-
processing the uploaded image. This step involves preparing the raw image data for fu
rther analysis, including operations like resizing, normalization, and noise reduction.
4. Feature Extraction: After pre-
processing, the system extracts significant features from the image. This involves iden
tifying key attributes and patterns that are essential for recognizing the content of the i
mage.
5. Segmentation: The system then segments the image, isolating specific regions of inter
est that contain the food items. This step helps in focusing the analysis on relevant par
ts of the image.
6. Classification Using CNN: The segmented image regions are fed into a Convolutional
Neural Network (CNN) for classification. CNN is a deep learning algorithm that is hi
ghly effective in analyzing visual data and recognizing patterns.
7. Food Image Classification: Finally, the system completes the classification process, i
dentifying the food items in the image. The classification results are then provided to t
he user, displaying the recognized food items.

2.4 System Design:

2.4.1 Design Options:

1. Algorithms:

 Convolutional Neural Networks (CNNs) for image classification.


 Support Vector Machines (SVM) for simpler classification tasks.
 Decision Trees for rule-based classification.

2. Data Structures:

 Arrays for storing image pixel data.


 Dictionaries for ingredient classification mapping.
 Graphs for recipe generation based on ingredient relationships.

3. Files:

 Image Files (JPEG, PNG) for storing uploaded images.


 Database Files (SQL) for storing user data, images, ingredients, and recipes.
 Configuration Files (JSON, YAML) for system settings and model parameters.

4. Interface Protocols:

 REST API for communication between frontend and backend.


 GraphQL API for flexible queries between user interfaces and servers.

22
 WebSocket’s for real-time data updates and interactions.

2.4.2 Technical Feasibility:

1. Algorithms:

 CNNs: Technically feasible with current machine learning frameworks (e.g., TensorFl
ow, PyTorch).
 SVM and Decision Trees: Feasible but less accurate for complex image data.

2. Data Structures:

 Arrays and Dictionaries: Efficient and feasible with ample computational resources.
 Graphs: Feasible with efficient graph traversal algorithms for recipe generation.

3. Files:

 Image Files and Databases: Technically feasible with cloud storage solutions (e.g., A
WS, Azure).
 Configuration Files: Easily manageable and feasible.

4. Interface Protocols:

 REST API and GraphQL API: Widely supported and feasible with modern web frame
works.
 WebSockets: Feasible for real-time interactions but might require more resources.

2.4.3 Operational Viability:

1. Algorithms:

 CNNs: High accuracy and reliable for complex image recognition tasks.
 SVM and Decision Trees: Suitable for smaller, less complex tasks.

2. Data Structures:

 Arrays and Dictionaries: Efficient in terms of storage and retrieval times.


 Graphs: Highly effective for managing complex relationships between ingredients and
recipes.

3. Files:

 Image Files: Standard formats ensuring compatibility and ease of access.


 Database Files: Robust for managing large datasets.
 Configuration Files: Easily modifiable to adapt to changing system requirements.

23
4. Interface Protocols:

 REST and GraphQL APIs: Ensure smooth data exchange between components.
 WebSockets: Enhances user experience with real-time updates.

2.4.4 Economic Viability:

1. Costs:

 Algorithms: Costs related to computational resources for training and inference.


 Data Structures: Minimal costs as these are implemented in software.
 Files: Costs for storage solutions depending on usage.
 Interface Protocols: Development and maintenance costs for API and Web
Socket implementation.

2.4.5 Optimal Design:

The best design for this project will prioritize accuracy and user experience while maintainin
g economic Viability.

Chosen Design:

 Algorithm: Convolutional Neural Networks (CNNs) for high accuracy in image recog
nition.
 Data Structures: Arrays and Dictionaries for efficient data handling.
 Files: Image files (JPEG, PNG) and SQL database for robust data management.
 Interface Protocols: REST API for reliable and scalable communication.

24
CHAPTER 3

MODULE IMPLEMENTATION & SYSTEM INTEGRATION

3.1 Module Implementations:

The recipe generator system is composed of several key modules, each handling a specific
and essential functionality. This modular approach allows easier debugging, testing, and
scalability. The implementation phase primarily utilized Python, TensorFlow, PyTorch,
OpenCV, Flask, and supporting libraries. A modular structure ensured that each component
could be developed, tested, and refined independently before final system integration. The
core modules implemented include:

 Image Processing Module: This module is the entry point of the system where users
upload food images. The system accepts images in standard formats such as JPEG and
PNG. It performs crucial preprocessing operations like resizing, normalization, and
denoising to ensure compatibility and quality. Additional features such as cropping to
focus on the main food item and data augmentation for training were also
implemented.

 Ingredient Detection Module: Using advanced Convolutional Neural Networks


(CNNs), this module extracts meaningful features from preprocessed images. Models
such as ResNet, EfficientNet, and MobileNet were explored and fine-tuned using
large food datasets like Food-101. This module returns a list of probable ingredients
or food items detected in the image, forming the basis of recipe generation.

 Recipe Generation Module: Leveraging Natural Language Processing (NLP),


particularly models like GPT-2 and T5, this module generates structured recipe
descriptions. These include titles, ingredient lists, quantities, and step-by-step cooking
instructions. The input to this module is the list of predicted ingredients, which it
transforms into a user-friendly recipe.

 User Interface Module: Developed using HTML, CSS, JavaScript, and Bootstrap,
this web-based UI enables seamless interaction with the backend system. Users can
upload images, view results, and interact with generated recipes. The interface is
designed to be intuitive and responsive, catering to both tech-savvy and non-technical
users.

25
 Database Management Module: Implemented using MySQL, this module stores
user-uploaded images, detected ingredients, and generated recipes. It allows retrieval
of previous queries and supports logging, user history, and system audit trails.

Each module was first prototyped using mock data and independently tested using unit tests.
Subsequent refinements were made before their integration into the overall system.

3.2 System Integration:

System integration played a crucial role in transforming individual components into a


cohesive, fully-functional application. Integration followed a bottom-up approach, starting
from the core backend modules and extending to the user interface. The primary tasks in the
integration phase included data exchange format alignment, module interfacing, API
development, and system orchestration.

The integration steps included:

 Linking the image processing pipeline to the ingredient detection model.

 Connecting the output of ingredient detection to the recipe generation module.

 Developing RESTful APIs using Flask to enable communication between the frontend
and backend.

 Ensuring uniform data formats (JSON) and exception handling mechanisms to


manage module interaction smoothly.

Several integration challenges emerged:

 Data Format Mismatch: Differences in tensor shapes or JSON keys were resolved
by implementing pre- and post-processing wrappers.

 Latency Issues: To manage long model inference times, images were resized to
optimal dimensions, and asynchronous request handling was used.

 Security and Upload Management: File upload validation and rate-limiting


mechanisms were integrated to protect the backend.

Comprehensive integration testing was conducted after every milestone to ensure the
correctness and compatibility of the system. The system was deployed on a cloud platform

26
(AWS/GCP), and APIs were containerized using Docker for easy deployment and scalability.
Static and dynamic tests were run to simulate multiple user scenarios and stress conditions.

The integrated system achieved a smooth, efficient pipeline where the user's uploaded image
is preprocessed, passed through a CNN for ingredient detection, followed by NLP-based
recipe generation, and finally rendered in a visually engaging UI. This full-stack integration
validated the operational and technical feasibility of the project.

To summarize, Chapter 3 demonstrated the successful implementation and seamless


integration of modular components into a complete and intelligent recipe generation system.
This chapter laid the foundation for evaluating system performance, discussed in the
following chapter.

3.3 Integration Strategy:

The top-down integration strategy was primarily followed:

 The user interface was integrated first to serve as the access point.

 Backend endpoints were connected to handle requests from the frontend.

 The machine learning model was linked with the image processor and the recipe
retrieval system.

 The database was finally integrated to fetch recipes based on predictions.

Each integration step was accompanied by unit testing, integration testing, and error handling
validation to ensure data consistency and system stability.

3.4 Tools & Technologies Used:

 Framework: Flask for backend integration

 Database: SQLite (or MongoDB for scalable architecture)

 Front-End: HTML, CSS, JavaScript

 Model Integration: Python-based CNN model served via Flask routes

 Testing Tools: Postman (API testing), Browser Dev Tools (UI validation)

27
3.5 Challenges and Solutions:

• Data Format Mismatch: Encountered inconsistencies between prediction outputs


and database queries. Resolved using standardized label mappings.

• Latency in Prediction: Model inference introduced delays. Optimized by loading the


model only once during server initialization.

• Cross-Origin Requests: Handled CORS issues when integrating frontend and


backend hosted on different ports using Flask-CORS library.

3.6 Conclusion:

Module and system integration were critical in ensuring that the Recipe Generator Using
Food Images project functioned effectively from end to end. Through systematic integration
strategies, rigorous testing, and modular design, the individual components were successfully
combined into a robust, user-friendly, and intelligent application capable of transforming
food images into full recipe suggestions.

28
CHAPTER 4

TESTING AND EVALUATION

4.1 Testing:

The testing phase of the Recipe Generator system aimed to validate functionality, reliability,
accuracy, and user satisfaction. This phase was critical in ensuring that the final system met
user expectations and operated efficiently under varying conditions. Several layers of testing
were performed, each targeting different aspects of the application.

Unit Testing Each module was tested independently using unit testing techniques to ensure
that individual components performed as expected. For the image processing module, tests
verified image format handling, resizing, and normalization processes. The ingredient
detection model was tested on known food images to validate output labels. The recipe
generation module was validated for syntax, relevance, and completeness of the generated
text. Tools like PyTest and unittest were used extensively.

Integration Testing Following unit tests, modules were integrated, and integration testing was
conducted to assess their interaction. The primary focus was on data flow and interface
communication between modules. For instance, ingredient lists from the detection module
were validated to ensure compatibility with the recipe generation model. API endpoints
developed in Flask were tested using Postman to validate input-output consistency.

System Testing System testing assessed the functionality of the entire application in a
simulated real-world environment. Scenarios included various types of food images (e.g.,
single dish, multiple items, blurry photos) to check system resilience. Results showed
consistent behavior across test cases. Testing also confirmed that system latency was within
acceptable bounds, with average response time from image upload to recipe generation
remaining under 5 seconds.

Functional Testing This type of testing ensured that the system met functional requirements.
Key functionalities tested included:

 Uploading images in supported formats (JPEG, PNG)

 Accurate ingredient prediction from food images

 Logical and well-structured recipe generation

29
 Proper storage and retrieval from the database

 Responsive UI interactions All functional aspects passed the test cases defined in the
software requirement specifications (SRS).

Regression Testing After updates and modifications, regression testing was conducted to
ensure that existing functionalities remained unaffected. For instance, when the recipe
formatting logic was modified, previously working features like image upload and ingredient
prediction were re-tested to confirm continued functionality.

Performance Testing Performance metrics were collected to evaluate system efficiency and
scalability:

 Average image preprocessing time: 0.8 seconds

 Ingredient detection time: 2.4 seconds

 Recipe generation time: 1.5 seconds

 Full cycle time (image upload to recipe display): ~5 seconds

 The system successfully handled 100 concurrent user requests with minimal latency
variations. These results demonstrated the system’s capacity to operate in real-time,
suitable for web deployment.

User Acceptance Testing (UAT) UAT was conducted by providing access to a sample group
of end users who tested the platform and provided feedback on usability, clarity, and
accuracy. Key feedback included:

 Appreciation for structured and easy-to-follow recipes

 Suggestions for adding cooking time and difficulty level

 Positive remarks on fast image processing and recipe generation Changes were made
based on this feedback, including improved formatting and addition of step numbers
in instructions.

Bug Tracking and Fixing All bugs and system anomalies found during testing were logged,
categorized by severity, and resolved. A centralized bug tracking sheet helped manage this
process. Most bugs were UI inconsistencies, rare incorrect predictions, or edge-case failures
(e.g., images with poor lighting or non-food items).

30
4.2 Types of Testing Performed:

4.2.1 Unit Testing:

 Purpose: To test individual modules/components in isolation.

 Scope:

• Image upload function

• Image preprocessing

• Model inference logic

• Recipe retrieval logic

• API response structure

 Tools Used:

• Python’s unit test library

• Postman (for API endpoint testing)

4.2.2 Integration Testing:

 Purpose: To ensure that modules work together as expected.

 Scope:

• Frontend and backend communication

• Model prediction and database lookup coordination

• Image-to-recipe full cycle flow

 Outcome: Integration testing confirmed that API requests correctly triggered model
predictions and the retrieval of corresponding recipes.

31
4.2.3 Functional Testing:

 Purpose: To test application behavior against defined requirements.

 Test Cases:

• Image uploads in various formats (JPG, PNG, JPEG)

• Invalid file type upload

• No image upload (edge case)

• Correct display of predicted dish and recipe

• User interaction with UI elements

 Result: The system responded appropriately to all functional scenarios, passing over
95% of functional test cases.

4.2.4 Usability Testing:

 Purpose: To assess the user experience and interface intuitiveness.

 Method: Feedback was gathered from a small group of users to evaluate:

• Ease of navigation

• Visual clarity

• Layout and readability

• Speed of interaction

 Outcome: Positive feedback was received with suggestions to include more UI


animations and tooltips for user guidance.

4.2.5 Performance Testing:

 Purpose: To evaluate system speed, responsiveness, and resource usage.

 Scope:

• Image upload and prediction response time

• Load handling (multiple users uploading images)

32
• Server response under peak load

 Tools Used: Apache JMeter (simulated load test)

 Results:

• Average model response time: 1.7 seconds

• Database query time: ~0.2 seconds

• Handled concurrent requests up to 20 users with minimal lag

4.2.6 Security Testing:

 Purpose: To ensure user data and application integrity are maintained.

 Scope:

• File validation to prevent malicious uploads

• CORS handling for frontend-backend requests

• Input sanitization (to avoid injection attacks)

 Outcome: No critical vulnerabilities found; CORS policy and file-type checks were
successfully enforced.

4.3 Evaluation:

The evaluation process focused on quantifying the system’s performance and usability
through well-defined metrics. These metrics were chosen based on system goals and included
both technical and user-centric parameters.

Accuracy Metrics

 Ingredient Detection Accuracy: Using a labeled test dataset of 1,000 images, the
system achieved an accuracy of 87%.

 Precision & Recall: Precision was calculated at 84% and recall at 86%, indicating
reliable ingredient identification with few false positives or negatives.

Recipe Generation Quality

 Readability: Recipes scored 91% on Flesch Reading Ease tests.

33
 Grammar and Syntax: Evaluated using grammar tools, outputs were found to be 95%
grammatically correct.

 Relevance: Manual review by culinary experts found that 88% of recipes were
relevant to the predicted ingredients.

User Feedback & Satisfaction Surveys conducted post-UAT showed that:

 85% of users found the system easy to use

 80% were satisfied with recipe accuracy

 90% were likely to recommend the tool Feedback also suggested interest in features
like multilingual support and dietary tags.

System Performance

 Throughput: System handled up to 100 concurrent requests with a 95% success rate.

 Response Time: Average of 4.7 seconds per image from upload to final result

 Downtime: Less than 0.2% during testing phase

Scalability & Flexibility The system architecture supports containerization using Docker,
which enables easy scaling on platforms like AWS or GCP. The modular nature allows
integration with future features like voice input, barcode scanning, or nutrition estimators.

Limitations Found During Evaluation

 Reduced accuracy for highly complex or mixed dishes

 Occasional overlap in ingredient detection for similar-looking items

 Dependency on quality and clarity of input image These limitations were documented,
and possible solutions (like better preprocessing, data augmentation, and ensemble
modeling) were proposed for future versions.

Improvement Actions

 Implemented better noise filtering for image preprocessing

 Retrained the model using additional dataset images

 Improved NLP logic for more realistic recipe instructions

34
Conclusion The comprehensive testing and evaluation process confirmed that the Recipe
Generator system is technically sound, user-friendly, and capable of performing its intended
tasks with high reliability. The feedback loop enabled continuous improvement, and
evaluation metrics showed promising results, indicating the system's readiness for
deployment and future enhancements.

4.4 Challenges Encountered:

 Model Inaccuracy on Mixed Dishes: The model struggled with composite food
items or multiple dishes in one image. To address this, the training dataset was
refined, and preprocessing was enhanced.

 File Size Issues: Large image files affected performance. This was mitigated by
setting an upload size limit and compressing images during preprocessing.

 API Timeout: Rare timeouts occurred under high load, resolved by optimizing Flask
server configuration.

4.5. Conclusion:

The testing and evaluation phase validated that the Recipe Generator Using Food Images
project performs reliably and meets its functional requirements. The system provides accurate
predictions, fast responses, and a smooth user experience. Comprehensive testing also
ensured that the application is secure, scalable, and ready for real-world deployment.

The positive evaluation results affirm the robustness of the system, while the identified
improvements form the basis for future updates.

35
CHAPTER 5

TASK ANALYSIS AND SCHEDULE OF ACTIVITIES

5.1 Task Decomposition:

To effectively develop the Recipe Generator system, the project was broken down into
smaller, manageable tasks. Each task contributed to building different components of the
system, allowing for parallel development and simplified debugging. Below is a breakdown
of the major tasks:

 Requirement Analysis

• Understanding user expectations and system functionality.

• Collecting and documenting requirements.

 Dataset Collection and Preprocessing

• Acquiring food images and recipe data.

• Cleaning, resizing, annotating, and augmenting the data.

 Model Development

• Training deep learning models for ingredient detection.

• Fine-tuning NLP models for recipe generation.

 Module Integration

• Combining ingredient detection, recipe generation, and user interface


modules.

 Frontend Development

• Designing the user interface for image upload and recipe display.

 Backend Development

• Setting up servers, APIs, and database connectivity.

 Testing and Debugging

36
• Conducting unit, system, and user acceptance testing.

 Deployment and Documentation

• Hosting the application and preparing final reports and documentation.

5.2 Project Schedule:

The project followed a time-bound schedule, ensuring that all tasks were completed within
the allocated period. The schedule was divided into weekly milestones, as shown below:

Week Tasks

1 Requirement Analysis, Literature Review

2 Data Collection and Cleaning

3 Preprocessing and Model Training

4 `Model Testing and Integration

5 Frontend and Backend Development

6 Testing and Debugging

7 Evaluation and Improvements

8 Final Deployment and Report Writing

Table 5.1: Project Schedule

This schedule ensured proper time management and task distribution among team
members.The Weekly checkpoints and progress reviews were conducted to track the status of
tasks. This approach not only ensured accountability but also allowed for early identification
and mitigation of potential risks or delays. Any deviations from the planned schedule were
promptly addressed through rescheduling or resource adjustments.

37
5.3 Task Specification:

Each task in the development pipeline was specified with its goal, input, output, estimated
effort, duration, and dependencies:

Task Goal Inputs Outputs Effort Duration Dependenc


(hrs) ies

Requirement Define User SRS 8 1 week None


Analysis system expectations document
scope and
features

Data Gather Public Raw datasets 12 2 weeks None


Collection food datasets
images and
recipes

Preprocessing Clean and Raw datasets Preprocessed 10 1 week Data


prepare dataset Collection
data

Model Train Preprocessed Trained 20 2 weeks Preprocessi


Training detection dataset models ng
and
generation
models

Module Connect all Model Integrated 15 1 week Model


Integration functional outputs system Training
modules

UI Design and Wireframes Functional 10 1 week Module


Development build the UI Integration
user
interface

38
Backend Develop Integrated Functional 12 1 week Module
Development APIs and modules backend Integration
server logic

Testing Ensure Final build Bug reports, 14 1 week UI,


quality feedback Backend

Evaluation Final Final Deployed 10 1 week Testing


and performanc application system,
Deployment e analysis Report
and launch

Table 5.2: Task Specification

39
CHAPTER 6

PROJECT MANAGEMENT
6.1 Major Risks and Contingency Plans:

Project management for the Recipe Generator using Deep Learning involves several critical
aspects, including identifying potential risks, formulating mitigation strategies, planning
project execution, and drawing insights from the overall experience. This chapter provides an
in-depth analysis of the risks encountered during the development process and the
contingency plans adopted to ensure smooth project progression.

6.2 Risk Identification:

 Data-Related Risks
• Risk: Inaccurate or insufficient data for training the deep learning models.
• Contingency Plan: Curated and augmented diverse datasets from multiple
sources. Applied data preprocessing and augmentation techniques such as
rotation, cropping, and brightness adjustments to simulate variability and
enrich the training set.
 Model Performance Issues
• Risk: The deep learning model may underperform in real-world scenarios due
to overfitting or poor generalization.
• Contingency Plan: Adopted regularization methods (dropout, early stopping),
used transfer learning, and validated models using cross-validation on a
segmented dataset.
 Infrastructure Challenges
• Risk: Limited computational resources affecting training time and system
responsiveness.
• Contingency Plan: Leveraged cloud-based infrastructure like Google Colab,
AWS, and GPU support for intensive model training. Scaled computing power
as required to accommodate project needs.
 Integration Failures
• Risk: Potential conflicts during module integration (e.g., communication
issues between image processing and recipe generation modules).
• Contingency Plan: Followed modular development and testing strategy,
established API contracts early in the project, and used tools like Postman to
ensure consistent data flow.
 Security and Privacy Concerns
• Risk: Unauthorized access to user-uploaded images or misuse of personal
data.
• Contingency Plan: Implemented secure authentication, HTTPS protocols, and
limited access permissions. Educated users about data handling policies.
 User Acceptance Risks
• Risk: Users may find the application difficult to use or the recipe outputs
irrelevant.
• Contingency Plan: Conducted User Acceptance Testing (UAT) early,
collected feedback, and made iterative UI/UX and model improvements based
on actual user behavior.

40
 Timeline Overruns

• Risk: Tasks taking longer than estimated, delaying final delivery.


• Contingency Plan: Applied Agile methodology with iterative milestones,
maintained a flexible buffer in the project schedule, and adopted parallel
development wherever feasible.
 Post-Deployment Maintenance Risk
• Risk: Post-deployment issues such as bugs or model drift due to changing user
behavior and food trends.
• Contingency Plan: Planned for periodic updates, established a feedback
mechanism, and included retraining schedules to maintain model accuracy and
user satisfaction.
 Budget Constraints
• Risk: Project may exceed estimated budget due to unforeseen requirements or
tools.
• Contingency Plan: Leveraged open-source tools and platforms wherever
possible, optimized resource usage, and secured cloud credits for
development.

6.3 Principle Learning Outcomes:

The development of the Recipe Generator project was a significant learning journey, both
technically and managerially. Here are the key takeaways across multiple dimensions:

6.4 Technical Skills Acquired:

 Deep Learning Frameworks:


• Gained practical experience in using TensorFlow and PyTorch for training
convolutional neural networks (CNNs) for image recognition.
 Image Processing Techniques:
• Understood and applied techniques like normalization, denoising, resizing,
and feature extraction for input preparation.
 Natural Language Processing (NLP):
• Learned how to generate grammatically sound and logically structured recipe
text from recognized ingredients using language models.
 Model Evaluation and Tuning:
• Practiced tuning hyperparameters, evaluating models using precision, recall,
and F1 scores, and improving model robustness using cross-validation.
 Web and API Development:
• Developed RESTful APIs using Flask to connect front-end user inputs to
back-end deep learning models. Gained understanding of client-server
communication.
 Cloud Computing and Deployment:
• Used cloud-based environments for model training and testing. Explored
containerization tools like Docker and cloud platforms for scalable
deployment.

41
 System Integration:
• Learned to combine multiple modules—image input, ingredient detection,
recipe generation—into a seamless system. Ensured compatibility and
communication among components.

6.5 Project Management and Soft Skills:

 Agile Planning:
• Managed tasks using Agile methodology with regular sprint reviews and
feedback loops. This improved adaptability and focused the team on short-
term deliverables.
 Team Collaboration:
• Improved team communication using platforms like Trello and Google
Workspace for document sharing, progress tracking, and issue resolution.
 Risk Management:
• Learned how to anticipate potential problems, formulate solutions in advance,
and pivot plans when necessary to mitigate delays or errors.
 Documentation and Reporting:
• Documented system design, development processes, and testing outcomes
thoroughly, improving report writing and technical articulation.
 Presentation Skills:
• Gained confidence in presenting technical content to both technical and non-
technical audiences through presentations, demos, and Q&A sessions.
 Leadership and Accountability:
• Distributed roles and responsibilities within the team. Took initiative during
critical project phases and ensured timelines were met without compromising
quality.

6.6 Innovation and Critical Thinking:

 While building a solution from scratch, the team encountered several unexpected
hurdles that required creativity and critical thinking. For example, ingredient overlap
was addressed using confidence thresholds and ensemble model strategies.
 Thinking from a user’s perspective added depth to the project by incorporating
intuitive UI designs and user-friendly error handling.
 Devised new strategies for multilingual support and dietary customization, laying a
foundation for future innovations.

6.7 Ethical Considerations and User-Centric Design:

 Ensured transparency in model predictions and respected user data privacy.


 Focused on inclusivity by designing features that could cater to various dietary
preferences in future iterations (e.g., vegan, gluten-free).
 Paid special attention to ensuring that the application is usable by people of all skill
levels, making the system accessible and widely applicable.

6.8 Overall Project Reflection:

The Recipe Generator using Deep Learning successfully combined multiple technical
domains, including computer vision, NLP, and web technologies, to create a user-centric

42
product. The project underscored the importance of structured planning, collaborative
teamwork, and continuous evaluation.

The system demonstrated real-world applicability, with a foundation strong enough to


support future enhancements such as:

 Multilingual recipe generation


 Voice-based user input
 Real-time nutritional estimation
 Integration with smart kitchen devices
 Support for dietary tracking and health monitoring

This experience not only contributed to skill development but also instilled confidence in
handling end-to-end AI projects—from ideation to deployment. It served as a stepping stone
for future ventures in AI, machine learning, and software engineering.

In conclusion, effective project management, proactive risk mitigation, and continuous


learning were key pillars that supported the successful implementation of this innovative
system. The lessons learned will serve as valuable assets in both academic and professional
domains. The holistic approach, blending technical rigor with creative design, proved
essential in turning a conceptual idea into a functional and impactful solution.

6.9 Future Scope:

While the current implementation provides a strong foundational system, there are numerous
opportunities for future improvements and expansions:

 Model Enhancement and Expansion:

• Increased Dataset Diversity: Incorporating a larger and more diverse dataset


(e.g., Recipe1M+) would allow the model to recognize a wider range of dishes
with higher accuracy.
• Multi label Classification: Implement the ability to detect multiple food
items in a single image, which is particularly useful for dishes with several
components.

 Recipe Personalization:

• Dietary Preferences: Allow users to filter or personalize recipe results based


on dietary needs (e.g., vegan, gluten-free, low-carb).
• Ingredient Substitution: Suggest alternative ingredients based on user
preferences or ingredient availability.

 Real-Time Mobile Application:

• Developing a mobile app version of the system with camera integration would
make the solution more accessible and practical for everyday uses.

43
 Multilingual Support:

• Enabling recipe generation in multiple languages would extend the usability of


the platform to a global audience.

 Voice-Enabled Interaction:

• Integrate voice assistants (like Google Assistant or Alexa) for a hands-free,


conversational recipe experience.

 Nutritional Analysis:

• Include automatic calculation of calories, macronutrients, and portion control


suggestions for each recipe based on standard food databases.

 Community and Feedback System:

• Users could rate recipes, upload their own food images, and contribute to the
recipe database, turning the platform into a collaborative cooking assistant.

 Integration with Grocery Platforms:

• Suggest grocery lists based on selected recipes and link with local or online
grocery stores for seamless shopping experiences.

6.10 Conclusion:

The Recipe Generator Using Food Images project successfully demonstrates the potential of
artificial intelligence and deep learning in simplifying everyday tasks such as meal planning
and cooking. By leveraging computer vision techniques and a pre-trained convolutional
neural network (CNN), the system can accurately analyze a food image and generate the most
relevant recipe, including ingredients and preparation steps.Throughout the development
process, the project integrated multiple technical components—image processing, model
inference, database querying, and web interfacing—into a cohesive and interactive
application. The frontend ensures a user-friendly interface, while the backend handles
complex tasks like image classification and data retrieval. Careful planning, modular
development, and structured integration allowed the system to perform efficiently and
reliably.

This project not only showcases a practical application of AI in food computing but also
highlights how deep learning can bridge the gap between visual inputs and actionable
outcomes. The project has strong potential to assist users in identifying unknown dishes,
discovering new recipes, and promoting healthier eating habits.

44
6.10 Result:

Figure 6.1: Landing Page

Figure 6.2: Website Description

45
Figure 6.3: Food Image Dataset

Figure 6.4: Upload food Image

46
Figure 6.5: Predicted Dish

47
Figure 6.6: Recipe Book

48
PLAGIARISM REPORT

Figure 6.7: DrillBit Plagiarism Report

49
Figure 6.7: DrillBit Plagiarism Report

50
RESEARCH PAPER

Figure 6.8: Research Paper

51
Figure 6.8: Research Paper

52
Figure 6.8: Research Paper

53
Figure 6.8: Research Paper

54
Figure 6.8: Research Paper

55
CERTIFICATES

56
57
APPENDIX A
This appendix highlights the front-end design and overall user experience of the Recipe
Generator Using Food Images application.

1. User Interface Overview:

The user interface was developed with a focus on simplicity, accessibility, and
responsiveness. The design ensures users can easily interact with the system, whether on
desktop or mobile devices. The primary components include:

 Image Upload Section: Positioned at the top of the homepage, this section allows
users to upload an image of a food item. Supported formats include .jpg, .png, and
.jpeg. A clear “Upload Image” button is available, along with a preview display of the
selected image.

 Recipe Output Section: Once an image is uploaded and processed, the application
displays the predicted dish name along with its corresponding recipe. This section
includes:

• Dish Title
• Ingredients List
• Step-by-step Cooking Instructions
• Nutritional Information (if available)

 Navigation Menu: A fixed navigation bar provides access to different parts of the
application such as:

• Home
• About
• How It Works
• Contact Us

 Feedback and Control Form: At the bottom of the interface, a contact form allows
users to submit feedback or inquiries, improving interaction and usability.

2. User Experience (UX) Considerations:

 Minimalistic Layout: A clean design minimizes distractions and maintains focus on


functionality.
 Color Palette: Soft and neutral tones were chosen to enhance readability and provide
a welcoming visual appeal.
 Responsive Design: Built using CSS and JavaScript, the interface adjusts seamlessly
across various screen sizes.

58
APPENDIX B
This appendix outlines the architectural, technical, and structural components of the backend
system, excluding actual code implementation.

1. System Architecture:

The backend is built using Python and Flask, functioning as a lightweight web server to
handle client requests. It communicates with the front-end through a RESTful API and
processes food image data for prediction.

2. Main Functional Blocks:

 Image Processing Module: Upon image submission, the backend receives and pre-
processes the image using normalization, resizing, and transformation techniques
compatible with the pre-trained model.
 Prediction Engine: A deep learning model (typically based on CNN architecture
such as ResNet or VGG) is used to identify the food item from the uploaded image.
 Recipe Mapping Module: Once the food item is identified, the system queries a
structured dataset or recipe bank to retrieve the relevant cooking instructions and
ingredient list.

3. Database Design:

 Database Type: A lightweight, document-based or relational database (e.g., SQLite


or MongoDB) is used to store recipes, ingredients, and nutrition data.
 Schema Overview: The database consists of multiple collections/tables:
• Recipes: Contains fields such as recipe_id, dish_name, ingredients,
instructions, nutrition_info.
• Food Labels: Maps predicted labels from the model to recipe_id.
• User Interactions (Optional): Stores uploaded image info and user feedback
for analysis or improvement purposes.

4. Security and Performance:

 Image Validation: Backend checks for valid image formats and applies size limits to
prevent overload.
 API Routing: Clean and well-structured API endpoints ensure seamless
communication between the client and server.
 Scalability Considerations: The backend design allows for easy migration to cloud
platforms like Heroku, Render, or AWS for future scalability and deployment.

59
REFERENCES
[1] Meyers, A., Johnston, N., Rathod, V., Korattikara, A., Gorban, A., Silberman,
N.&Murphy, K. (2015). Im2Calories: Towards an Automated Mobile Vision Food Diary.
ICCV.arXiv:1504.06193

[2] Salvador, A., Hynes, N., Aytar, Y., Marin, J., Ofli, F., Weber, I., & Torralba, A. (2017).
Learning Cross-modal Embeddings for Cooking Recipes and Food Images. IEEE CVPr

arXiv:1707.03496

[3] Kawano, Y., & Yanai, K. (2014). Automatic Expansion of a Food Image Dataset
Leveraging Existing Categories with Domain Adaptation. ECCV Workshops. Link

[4] Wang, X., Min, W., Liu, X., & Jiang, S. (2020). Recipe1M+: A Dataset for Learning
Cross-modal Embeddings for Cooking Recipes and Food Images. IEEE Transactions on
Pattern Analysis and Machine Intelligence. Link

[5] GitHub – Recipe Generation from Images Projects

https://github.com/search?q=recipe+generation+image

[6] TensorFlow Tutorials – Image Classification and Transfer Learning

https://www.tensorflow.org/tutorials/images/classification

[7] Recipe1M+ Dataset – MIT CSAIL https://im2recipe.csail.mit.edu/

[8] Food-101 Dataset – ETH Zurich https://www.vision.ee.ethz.ch/datasets_extra/food-101/

[9] Kaggle – Food Image Classification Challenges https://www.kaggle.com/datasets

[10] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with
Deep Convolutional Neural Networks. NeurIPS.

[11] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image
Recognition Conference CVPR.

[12] Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks for Large-
Scale Image Recognition. arXiv preprint.arXiv:1409.1556

60
[13] Vaswani, A., et al. (2017). Attention is All You Need. NeurIPS. Link

[14] Radford, A., et al. (2021). Learning Transferable Visual Models From Natural Language
Supervision. ICML (CLIP Paper). Link

[15] Chen, M., Dhingra, K., Wu, W., Yang, L., Sukthankar, R., & Yang, J. (2009). PFID:
Pittsburgh Fast-Food Image Dataset. IEEE International Conference on Image Processing
(ICIP).

[16] Zhou, F., Lin, Z., & Brandt, J. (2016). Chef Mapper: A Deep Learning Approach for
Cross-modal Recipe Retrieval. ECCV.

61

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy