
ELG 5121/CSI 7163: MULTIMEDIA COMMUNICATIONS

LIST OF POTENTIAL PROJECTS [FALL, 2024]

1. User-Centric Affective Haptic Jacket: This project involves developing a haptic jacket to
simulate various physical sensations, enhancing immersive experiences in virtual environments.
Students will modify an existing jacket design by integrating hardware and programming to
create realistic tactile feedback.
Scenarios to implement: Students are encouraged to propose their own scenarios for the haptic
jacket. For example, one possible scenario could be a gaming simulation where the jacket
simulates the impact of being shot at a specific spot on the body. All proposed scenarios must be
submitted for approval to ensure they align with the project goals and are technically feasible.
Project Exclusivity and Selection: Due to the limited availability of hardware, only two teams
will be selected to work on this project (one will be provided the vest and the other will work on
sleeves). If multiple teams apply, the project will be awarded to the teams that present the most
compelling proposals and demonstrate a strong ability to complete the project within the given
timeframe.
Resources and Support: All necessary hardware for the project will be provided and must be
returned upon completion of the project. Students will use an NVIDIA Jetson board to manage
the haptic feedback. In certain cases, an Arduino or similar controller may be used as a
substitute, but only after receiving approval.
Expected Outcome: By the end of this project, students should have a functional haptic jacket
tailored to the approved scenarios.
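
A minimal device-side sketch of the idea, in Python, is shown below. It assumes the jacket's
vibration motors are wired to Jetson GPIO pins through suitable driver circuitry; the pin
numbers and zone names are hypothetical and depend on your wiring.

# Pulse a vibration motor mapped to a body zone (e.g., to simulate an
# impact at that spot). Uses NVIDIA's Jetson.GPIO library; the BOARD
# pin numbers and zone names below are hypothetical.
import time
import Jetson.GPIO as GPIO

ZONE_PINS = {"chest": 12, "left_shoulder": 16, "right_shoulder": 18}

GPIO.setmode(GPIO.BOARD)
for pin in ZONE_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def pulse(zone, duration_s=0.2, repeats=3, gap_s=0.1):
    # Short repeated pulses read as a sharper "impact" than one long buzz.
    pin = ZONE_PINS[zone]
    for _ in range(repeats):
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(duration_s)
        GPIO.output(pin, GPIO.LOW)
        time.sleep(gap_s)

try:
    pulse("chest")  # e.g., a hit registered on the chest in the game
finally:
    GPIO.cleanup()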

2. Interaction-Based Device Control from Virtual Environment: This project aims to connect
virtual reality (VR) experiences with real-world devices. Students will create a system where
actions in a VR environment, like touching or clicking on objects, trigger physical responses
(like turning on an LED) in the real world. The goal is to build a bridge between virtual
interactions and real-world hardware.
Project Goal: Students will design a system where interactions within a VR world (using a Meta
Quest VR headset) control real-world devices. For example, touching or approaching a virtual
object could light up an LED or activate another device.
Examples of Possible Implementations: When a user interacts with a specific object in the VR
world, a particular LED or set of LEDs lights up in the real world. For example, as a user
approaches a virtual flower in the garden, a real-world LED lights up with the corresponding
color. All proposed scenarios need approval to ensure they are feasible and match the project’s
goals.
Resources and Support:
• Hardware: Sensors, actuators, and other necessary equipment will be provided.
• Microcontroller: Students will use the NVIDIA Jetson for device control (with the option
to use an Arduino or other controllers with approval).
Expected Outcome: A system where interactions in the virtual world result in real-world
actions, like triggering an LED or controlling a device.
NOTE: You can check out a similar implementation in this video:
https://www.youtube.com/watch?v=BXG7hluVvJw&t=3s
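
One possible shape for the device-side bridge is sketched below in Python: the VR app (e.g., a
Unity scene on the Quest) sends a small UDP message whenever the user touches or approaches an
object, and a listener on the Jetson toggles the matching LED. The message format, port, object
name, and pin number are all assumptions.

# Listen for UDP messages from the VR app and toggle an LED.
# Assumed wire format: "<object>:<0|1>", e.g. "flower:1".
import socket
import Jetson.GPIO as GPIO

LED_PIN = 12  # hypothetical BOARD pin wired to the LED

GPIO.setmode(GPIO.BOARD)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))  # port chosen arbitrarily

try:
    while True:
        data, _addr = sock.recvfrom(64)
        obj, _, state = data.decode().partition(":")
        if obj == "flower":  # object name sent by the VR scene
            GPIO.output(LED_PIN, GPIO.HIGH if state == "1" else GPIO.LOW)
finally:
    GPIO.cleanup()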

3. Visual Event Detection and Mapping from Video Inputs: This project focuses on
developing a software algorithm that detects moving objects and identifies key events within
three different videos. The primary goal is to transform these visual cues into corresponding
visual signals, which will then be mapped to a grid. The algorithm must be capable of generating
output files that simulate the detected events and provide a visual simulation to demonstrate its
effectiveness. The challenge lies in accurately processing the visual inputs and ensuring that the
signals are correctly mapped to the grid. By the end of the project, students will deliver a
functional algorithm that transforms video data into meaningful, event-based visual signals.
Project Goals:
• Event Detection: Develop an algorithm that accurately identifies a moving object and
categorizes key events from three different videos.
• Signal Generation: The algorithm should output files that contain the corresponding
information simulating the key events from a video. A visual simulation is required to
demonstrate that it works.
• Implementation Details: The core challenge is to create software that processes visual
inputs and translates these into visual signals mapped to the grid.
Expected Outcome: A software algorithm capable of transforming visual cues into visual
signals that are accurately mapped to the grid provided.
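
One possible starting point is sketched below in Python: OpenCV background subtraction finds
moving blobs, and each blob's centroid is mapped to a grid cell and logged per frame. The 8x8
grid size, area threshold, and file names are assumptions; the actual grid specification will
be provided.

# Detect moving objects with background subtraction and map each
# object's centroid onto an N x M grid, logging (frame, row, col).
import cv2
import numpy as np

GRID_ROWS, GRID_COLS = 8, 8  # placeholder grid dimensions
MIN_AREA = 500               # ignore small noise blobs

cap = cv2.VideoCapture("input_video.mp4")  # hypothetical filename
subtractor = cv2.createBackgroundSubtractorMOG2()
events = []

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        row = cy * GRID_ROWS // frame.shape[0]
        col = cx * GRID_COLS // frame.shape[1]
        events.append((frame_idx, row, col))
    frame_idx += 1

cap.release()
np.savetxt("events.csv", events, fmt="%d", delimiter=",")  # output file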

4. Development of a Personalized RAG Model for Haptic Research: This project entails the
training of a Retrieval-Augmented Generation (RAG) model specifically tailored to haptic
technology research. Students will train their model on provided PDF materials containing
haptics papers. The goal is to refine information retrieval and generation capabilities in the context
of haptic technology.
Project Goals:
• Model Training: Utilize conference papers from the last few years to train a RAG model that
can effectively synthesize information in the field of haptics.
• Comparative Evaluation: Test the trained RAG model against models like GPT-4, Gemini,
and others to assess its performance in generating accurate and relevant content.
Implementation Details:
• Model Customization: Adapt the RAG model to focus specifically on topics relevant to
haptics, enhancing its ability to generate specialized responses.
• Performance Metrics: Establish criteria for evaluating the model's effectiveness in
retrieving and generating information, such as accuracy, relevance, and fluency of the
outputs.
Resources and Support Provided:
• Training Data: Access to a curated collection of recent papers on haptics in PDF format.
Expected Outcome: A fully functional RAG model tailored to haptic technology research,
demonstrating improved performance in information retrieval and generation compared to
general models.
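
A minimal retrieval-and-prompt sketch in Python is shown below, assuming pypdf for text
extraction and TF-IDF retrieval with scikit-learn; an embedding model (e.g.,
sentence-transformers) would be a natural drop-in upgrade, and the file name and query are
placeholders.

# Chunk a haptics PDF, retrieve the most relevant chunks for a query
# with TF-IDF, and assemble a grounded prompt for a generator LLM.
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pdf_chunks(path, size=800):
    # Flatten the PDF to text and split into fixed-size chunks.
    text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = pdf_chunks("haptics_paper.pdf")  # hypothetical filename

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(chunks)

query = "How is vibrotactile feedback rendered on wearable devices?"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
top = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:3]

prompt = "Answer using only the context below.\n\nContext:\n"
prompt += "\n---\n".join(chunks[i] for i in top)
prompt += f"\n\nQuestion: {query}"
print(prompt)  # feed this prompt to the generator model of your choice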

5. Integrating ChatGPT with a Human-Like Robot for Natural Language Interaction: This
project involves integrating ChatGPT with a small human-like robot to enable natural language
interaction between users and the robot. Students will be tasked with implementing a
communication interface that allows the robot to understand and respond to user queries,
requests, or casual conversations using the ChatGPT language model.
Expected Outcome: The project is considered successful if students deliver a human-like robot
capable of engaging in natural language conversations with users, leveraging the ChatGPT
model. The system should be well-documented, user-friendly, and capable of providing coherent
and contextually relevant responses. The presentation should effectively communicate the
integration process and the innovative potential of the system.
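
A minimal sketch of the conversation loop, using the OpenAI Python client, is shown below. The
model name is a placeholder, and speak() stands in for whatever text-to-speech or gesture call
the robot's SDK actually provides.

# Conversation loop: keep a running history so replies stay in context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system",
            "content": "You are a friendly small humanoid robot."}]

def speak(text):
    print("ROBOT:", text)  # replace with the robot's TTS / gesture API

while True:
    user_text = input("USER: ")  # replace with a speech-to-text front end
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any available chat model
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    speak(reply)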

6. Teleoperation Robotic Arm for Handshaking Purposes: This project involves the
development of a teleoperation system using robotic arms to simulate handshakes between two
individuals separated by a distance. Each person has a camera capturing their hand motions, and
the teleoperation system transmits control signals over a communication network to replicate the
hand motions on a remote robotic arm.
Expected Outcome: The project is considered successful if students deliver a functional
teleoperation system that enables users to engage in handshakes using robotic arms. The system
should provide a realistic and immersive experience, with a focus on safety and user interaction.
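
A minimal sketch of the capture side is shown below: MediaPipe Hands tracks the user's wrist in
the camera feed, and its normalized position is streamed over UDP. The receiving side would map
these values to joint commands using the robotic arm's own SDK; the address and wire format are
assumptions.

# Track the user's wrist with MediaPipe Hands and stream its
# normalized (x, y, z) position to the remote arm controller.
import json
import socket
import cv2
import mediapipe as mp

REMOTE = ("192.168.1.50", 6000)  # hypothetical address of the remote arm
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

cap = cv2.VideoCapture(0)
hands = mp.solutions.hands.Hands(max_num_hands=1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        wrist = results.multi_hand_landmarks[0].landmark[0]  # WRIST landmark
        msg = {"x": wrist.x, "y": wrist.y, "z": wrist.z}  # normalized coords
        sock.sendto(json.dumps(msg).encode(), REMOTE)

cap.release()
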
7. AI-Driven Sentiment Analysis for Multimedia: In this project, students will develop a tool that
analyzes the sentiment (positive, negative, neutral) in recorded audio or video content. The
system will process pre-recorded multimedia files, providing feedback on the emotional tone of
conversations through visual indicators.
Project Goals:
• Sentiment Analysis of Recorded Content: Use AI models to analyze speech or text from
pre-recorded audio or video, determining the sentiment (positive, negative, or neutral) of the
content.
• Sentiment Dashboard: Display sentiment trends in a dashboard with visual indicators like
graphs or charts, showing emotional shifts throughout the recording.
Expected Output: The expected output of this project is a functional AI-driven tool that
analyzes sentiment from multimedia. The tool will include a user-friendly dashboard that
visualizes emotional trends through graphs and charts. The final system will provide clear
insights into the emotional tone of the multimedia content.
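
A minimal sketch of the analysis step is shown below, assuming the audio has already been
transcribed into timestamped segments (e.g., by a speech-to-text tool); it scores each segment
with a Hugging Face sentiment pipeline and plots the trend. The sample segments and the
pipeline's default model are assumptions.

# Score pre-transcribed segments and plot the sentiment trend over time.
from transformers import pipeline
import matplotlib.pyplot as plt

segments = [  # hypothetical (timestamp_s, text) pairs from a transcript
    (0, "I really love how this demo turned out."),
    (15, "The latency is still a bit frustrating."),
    (30, "Overall I think we are in good shape."),
]

classifier = pipeline("sentiment-analysis")  # default English model

times, scores = [], []
for t, text in segments:
    result = classifier(text)[0]
    # Fold label and confidence into one signed score for plotting.
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    times.append(t)
    scores.append(signed)

plt.plot(times, scores, marker="o")
plt.xlabel("Time (s)")
plt.ylabel("Sentiment (+pos / -neg)")
plt.title("Sentiment trend across the recording")
plt.show()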

PROJECT IDEAS RELATED TO ‘GENERATIVE AI’

8. AI Recipe Creator: Build an AI system that generates unique recipes based on given
ingredients. Users can input ingredients they have, and the AI suggests creative recipes by
combining them in novel ways. This project could involve natural language processing and a
database of existing recipes to train the model.
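
As one lightweight starting point, the sketch below generates a recipe by prompting an LLM with
the user's ingredients; the model name is a placeholder, and grounding against a real recipe
database would be a later step.

# Generate a recipe from user-supplied ingredients via an LLM prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_recipe(ingredients):
    prompt = (
        "Invent one creative recipe using only these ingredients "
        "plus pantry staples: " + ", ".join(ingredients) +
        ". Give a title, an ingredient list, and numbered steps."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(suggest_recipe(["chickpeas", "spinach", "coconut milk"]))
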
9. Customizable Chatbot: Design a chatbot that learns from interactions and personalizes its
responses over time. This project can be built on existing chatbot frameworks, with a focus on
training the AI to recognize user preferences and adjust its conversational style accordingly.
10. Simple AI Music Remix Tool: Create a tool that takes an existing piece of music and uses
AI to remix it into a different genre or style. This could involve analyzing the structure of the
music and applying transformations to alter its tempo, instruments, or mood.
11. Generative Music Composition Software: Design software that composes music using
AI algorithms. This project would involve training the AI on a variety of music genres and
styles. The software could allow users to input certain parameters (like tempo, mood, genre)
and use AI to compose original music pieces. This could be particularly interesting for
exploring the intersection of technology and creative expression in music.
12. Fashion Design Assistant: Develop an application that suggests fashion designs based on
current trends, user preferences, and input parameters like occasion, weather, or style
preferences. The AI could analyze fashion trends and generate sketches or mock-ups of clothing
items.
