
Sistemas II Anexo 3

The document outlines a student activity report for a course on Digital Systems II, focusing on the ESP32 microcontroller's dual-core architecture. The students developed a facial recognition project that organizes patient information in Google Drive and captures wound images via an ESP32-CAM. The project aims to enhance patient care by enabling early detection of wound changes through an automated system.

ACTIVITY SUBMITTED BY THE STUDENT

1.​ Course identification: 2BM

Academic program: Biomedical Equipment Maintenance Technology


Course Name: Digital Systems II
Class Number: 5AM

Topic: ESP32
Objective:
Explore and evaluate the dual-core architecture of the ESP32 microcontroller by identifying the functional differences
between Core 0 and Core 1, and apply this understanding in the final project by assigning specific tasks to each
core to optimize system performance and efficiency.

Didactic strategy:

2.​ Student Identification

Full Name: Angela Mabel Caro Guerra


ID Number: 129189
Date: 25/05/2025

Full Name: Liseth Tatiana Romero Casas


ID Number: 128846
Date: 25/05/2025

3. Activities Developed

● Our first laboratory activity used the ESP32 to identify the functional differences between Core 0 and
Core 1. We used two potentiometers to control blink frequency, demonstrated with two LEDs, each
controlled by one potentiometer.
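The potentiometer-to-frequency mapping used in this lab can be sketched in Python (`map_range` is a hypothetical stand-in for Arduino's `map()` function; only the arithmetic is taken from the lab code):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    # Integer re-mapping, equivalent to Arduino's map()
    return (value - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

def blink_frequency(pot_value):
    # ADC reading (0-4095) -> half-period in ms -> blink frequency in Hz
    half_period_ms = map_range(pot_value, 0, 4095, 1000, 10)
    # One full cycle is two half-periods (LED on + LED off)
    return 1000.0 / (2 * half_period_ms)

print(blink_frequency(0))     # 0.5 Hz at the minimum pot value
print(blink_frequency(4095))  # 50.0 Hz at the maximum
```

Note that a 1000 ms half-period gives 0.5 Hz, not 1 Hz; the comment in the lab sketch slightly overstates the lower bound.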
● The second use of the ESP32 was in our final project, which involved facial recognition. The system
could take a photo in two ways: manually or automatically. If the person was recognized, the photo
was saved automatically in a folder named after them, and their name, ID, and capture time were
displayed. If the person was not recognized, the system sent an unrecognized-identity notification
and created a new folder in Google Drive for that person.
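The two capture paths above can be sketched as a small decision function (a simplified sketch; `file_capture` and its arguments are illustrative names, not the project's actual API, and the dict stands in for Google Drive folders):

```python
def file_capture(name, known_patients, drive_folders):
    # Decide where a captured photo goes, mirroring the two paths above
    if name in known_patients:
        # Recognized: save into the patient's existing folder
        folder = drive_folders.setdefault(name, [])
        folder.append("photo.jpg")
        return f"saved to folder '{name}'"
    # Not recognized: notify and create a new folder for this person
    drive_folders.setdefault(name, [])
    return "unrecognized identity: new folder created"
```

In the real project the folder lookup and creation are Google Drive API calls, and the notification is printed to the HMI.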
4. Learning Evidence

First laboratory:

Diagrams:
Hardware:
Software:

TaskHandle_t Task1;
TaskHandle_t Task2;

int led1 = 2;
int led2 = 4;
int pot1 = 34;
int pot2 = 35;

void setup() {
  Serial.begin(115200);
  pinMode(pot1, INPUT);
  pinMode(pot2, INPUT);
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);

  xTaskCreatePinnedToCore(Task1code, "Task1", 10000, NULL, 1, &Task1, 0);
  delay(500);
  xTaskCreatePinnedToCore(Task2code, "Task2", 10000, NULL, 1, &Task2, 1);
  delay(500);
}

void Task1code(void * pvParameters) {
  Serial.print("Task1 running on core ");
  Serial.println(xPortGetCoreID());
  for (;;) {
    int valorPot1 = analogRead(pot1);                   // 0 - 4095
    int delayTime = map(valorPot1, 0, 4095, 1000, 10);  // half-period from 1000 ms down to 10 ms
    float freq = 1000.0 / (2 * delayTime);              // frequency = 1 / (2 * half-cycle time)
    digitalWrite(led1, HIGH);
    vTaskDelay(delayTime / portTICK_PERIOD_MS);
    digitalWrite(led1, LOW);
    vTaskDelay(delayTime / portTICK_PERIOD_MS);
    Serial.print("LED1 frecuencia: ");
    Serial.print(freq, 2);
    Serial.println(" Hz");
  }
}

void Task2code(void * pvParameters) {
  Serial.print("Task2 running on core ");
  Serial.println(xPortGetCoreID());
  for (;;) {
    int valorPot2 = analogRead(pot2);                   // 0 - 4095
    int delayTime = map(valorPot2, 0, 4095, 1000, 10);
    float freq = 1000.0 / (2 * delayTime);
    digitalWrite(led2, HIGH);
    vTaskDelay(delayTime / portTICK_PERIOD_MS);
    digitalWrite(led2, LOW);
    vTaskDelay(delayTime / portTICK_PERIOD_MS);
    Serial.print("LED2 frecuencia: ");
    Serial.print(freq, 2);
    Serial.println(" Hz");
  }
}

void loop() {
  // Not needed; all work happens in the two pinned tasks
}

Second laboratory: Final project

Hardware:
Software: For this project we used Python, Visual Studio Code, and Arduino.
The Arduino program activated the camera and generated a URL where we could view the live stream.

Arduino

#include <WebServer.h>
#include <WiFi.h>
#include <esp32cam.h>

const char* WIFI_SSID = "Mabel";
const char* WIFI_PASS = "Mabel1234";

WebServer server(80);

static auto loRes = esp32cam::Resolution::find(320, 240);
static auto hiRes = esp32cam::Resolution::find(800, 600);

const int ledPin = 4;

void serveJpg()
{
  auto frame = esp32cam::capture();
  if (frame == nullptr) {
    Serial.println("CAPTURE FAIL");
    server.send(503, "", "");
    return;
  }
  Serial.printf("CAPTURE OK %dx%d %db\n", frame->getWidth(), frame->getHeight(),
                static_cast<int>(frame->size()));
  server.setContentLength(frame->size());
  server.send(200, "image/jpeg");
  WiFiClient client = server.client();
  frame->writeTo(client);
}

void handleJpgLo()
{
  if (!esp32cam::Camera.changeResolution(loRes)) {
    Serial.println("SET-LO-RES FAIL");
  }
  serveJpg();
}

void handleJpgHi()
{
  if (!esp32cam::Camera.changeResolution(hiRes)) {
    Serial.println("SET-HI-RES FAIL");
  }
  serveJpg();
}

// Turn the LED on
void handleLedOn()
{
  digitalWrite(ledPin, HIGH);
  Serial.println("LED ENCENDIDO");
  server.send(200, "text/plain", "LED ON");
}

// Turn the LED off
void handleLedOff()
{
  digitalWrite(ledPin, LOW);
  Serial.println("LED APAGADO");
  server.send(200, "text/plain", "LED OFF");
}

void setup()
{
  Serial.begin(115200);
  Serial.println();

  // Camera configuration
  {
    using namespace esp32cam;
    Config cfg;
    cfg.setPins(pins::AiThinker);
    cfg.setResolution(loRes); // this is the change
    cfg.setBufferCount(2);
    cfg.setJpeg(80);
    bool ok = Camera.begin(cfg);
    Serial.println(ok ? "CAMARA OK" : "CAMARA FAIL");
  }

  // LED configuration
  pinMode(ledPin, OUTPUT);
  digitalWrite(ledPin, LOW); // ensure it starts off

  // WiFi connection
  WiFi.persistent(false);
  WiFi.mode(WIFI_STA);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println();
  Serial.print("http://");
  Serial.print(WiFi.localIP());
  Serial.println("/cam-lo.jpg");
  Serial.print("http://");
  Serial.print(WiFi.localIP());
  Serial.println("/cam-hi.jpg");

  // Server routes
  server.on("/cam-lo.jpg", handleJpgLo);
  server.on("/cam-hi.jpg", handleJpgHi);
  server.on("/led/on", handleLedOn);
  server.on("/led/off", handleLedOff);
  server.begin();
}

void loop()
{
  server.handleClient();
}

The "Wound Monitoring" project improves patient care by offering a solution that streamlines and
enhances the detection of wound changes. Using facial recognition, the system automatically organizes each
patient's information in Google Drive. The ESP32-CAM camera, accessible via URL, allows the capture of
wound images. Through an intuitive HMI (Human-Machine Interface), healthcare personnel can perform
manual captures, which are instantly uploaded to the patient's dedicated folder. The most notable feature is
the automatic comparison function, which alerts staff to potential redness or infection, enabling early
intervention and preventing severe complications. This not only reduces the risk of infection but also
significantly reduces the time and effort required of medical staff.
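The automatic comparison is described only at a high level; one plausible sketch (our assumption, not the project's actual algorithm) compares the mean red-channel share of two wound photos and flags a significant increase:

```python
import numpy as np

def redness_score(image_bgr):
    """Mean share of the red channel in total intensity (OpenCV images are BGR)."""
    img = image_bgr.astype(np.float64)
    total = img.sum()
    if total == 0:
        return 0.0
    return img[:, :, 2].sum() / total  # channel index 2 is red in BGR order

def redness_increased(previous_bgr, current_bgr, threshold=0.05):
    """Flag the wound if the red share grew by more than `threshold` between captures."""
    return redness_score(current_bgr) - redness_score(previous_bgr) > threshold
```

A production system would need lighting normalization and image alignment before such a comparison is meaningful; the threshold here is an arbitrary placeholder.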

Python
import cv2
import os
import face_recognition
import urllib.request
import numpy as np
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from googleapiclient.http import MediaFileUpload
from google.auth.transport.requests import Request  # CORRECTION: dot, not underscore
from google_auth_oauthlib.flow import InstalledAppFlow
from datetime import datetime
import time
import tkinter as tk
from PIL import Image, ImageTk

# --- Google Drive configuration ---
SCOPES = ['https://www.googleapis.com/auth/drive.file']
CREDENTIALS_FILE = 'TU_ARCHIVO_CLIENT_SECRET.json'  # replace with your file
FOLDER_NAME = "Seguimiento Heridas"

def authenticate_google_drive():
    creds = None
    if os.path.exists('token.json'):
        creds = Credentials.from_authorized_user_file('token.json', SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_FILE, SCOPES)
            creds = flow.run_local_server(port=0)
        with open('token.json', 'w') as token:
            token.write(creds.to_json())
    return build('drive', 'v3', credentials=creds)

def create_folder_if_not_exists(drive_service, folder_name, parent_id=None):
    try:
        query = (f"mimeType='application/vnd.google-apps.folder' "
                 f"and name='{folder_name}' and trashed=false")
        if parent_id:
            query += f" and '{parent_id}' in parents"
        results = drive_service.files().list(q=query).execute()
        items = results.get('files', [])
        if not items:
            file_metadata = {'name': folder_name,
                             'mimeType': 'application/vnd.google-apps.folder'}
            if parent_id:
                file_metadata['parents'] = [parent_id]
            file = drive_service.files().create(body=file_metadata, fields='id').execute()
            print(f"Carpeta '{folder_name}' creada con ID: {file.get('id')}")
            return file.get('id')
        else:
            print(f"Carpeta '{folder_name}' ya existe con ID: {items[0].get('id')}")
            return items[0].get('id')
    except HttpError as error:
        print(f'Ocurrió un error al crear la carpeta: {error}')
        return None

def get_or_create_main_folder(drive_service):
    return create_folder_if_not_exists(drive_service, FOLDER_NAME)

def upload_image_to_drive(drive_service, folder_id, image_path, image_name):
    try:
        media = MediaFileUpload(image_path, mimetype='image/jpeg', resumable=True)
        file_metadata = {'name': image_name, 'parents': [folder_id]}
        file = drive_service.files().create(body=file_metadata, media_body=media,
                                            fields='id').execute()
        print(f"Imagen '{image_name}' subida con ID: {file.get('id')}")
        try:
            os.remove(image_path)
        except PermissionError as e:
            print(f"Error al eliminar el archivo {image_path}: {e}")
        except FileNotFoundError:
            pass  # the file is already gone
    except HttpError as error:
        print(f'Ocurrió un error al subir la imagen: {error}')

# --- Facial recognition and HMI ---
imageFacesPath = "C:/Users/HP PAVILION GAMING/Desktop/face_recognition_models-master/faces"
url = 'http://192.168.1.7/cam-lo.jpg'

facesEncodings = []
facesNames = []
for file_name in os.listdir(imageFacesPath):
    image = cv2.imread(imageFacesPath + "/" + file_name)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    try:
        f_coding = face_recognition.face_encodings(image)[0]
        facesEncodings.append(f_coding)
        facesNames.append(file_name.split(".")[0])
    except IndexError:
        print(f"Advertencia: No se encontró rostro en {file_name}")
        continue

drive_service = authenticate_google_drive()
main_folder_id = get_or_create_main_folder(drive_service)

root = tk.Tk()
root.title("Control de Captura de Heridas")

video_label = tk.Label(root)
video_label.pack()

patient_var = tk.StringVar(root)
patient_var.set("Desconocido")
patient_label = tk.Label(root, text=f"Paciente Detectado: {patient_var.get()}")
patient_label.pack()

detected_patient = None
auto_capture_active = False  # state flag for automatic capture

def capture_and_upload():
    global detected_patient
    if detected_patient:
        print(f"Capturando imagen para {detected_patient}...")
        try:
            imgResponse = urllib.request.urlopen(url, timeout=2)
            imgNp = np.array(bytearray(imgResponse.read()), dtype=np.uint8)
            orig = cv2.imdecode(imgNp, -1)
            orig = cv2.flip(orig, 1)

            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            image_filename = f"captura_manual_{detected_patient}_{timestamp}.jpg"
            image_path = image_filename

            cv2.imwrite(image_path, orig)

            patient_folder_id = create_folder_if_not_exists(drive_service,
                                                            detected_patient,
                                                            main_folder_id)
            if patient_folder_id:
                upload_image_to_drive(drive_service, patient_folder_id,
                                      image_path, image_filename)
            else:
                print(f"No se pudo obtener o crear la carpeta para {detected_patient}")
        except Exception as e:
            print(f"Error al capturar y subir la imagen: {e}")
    else:
        print("No se ha detectado ningún paciente.")

def start_auto_capture():
    global auto_capture_active
    if detected_patient:
        auto_capture_active = True
        print(f"Iniciando captura automática para {detected_patient}...")
        auto_capture_loop()
    else:
        print("No hay paciente detectado para iniciar captura automática.")

def stop_auto_capture():
    global auto_capture_active
    auto_capture_active = False
    print("Captura automática detenida.")

def auto_capture_loop():
    global auto_capture_active
    if auto_capture_active and detected_patient:
        capture_and_upload()
        root.after(5000, auto_capture_loop)  # repeat every 5000 ms (5 seconds)

capture_button = tk.Button(root, text="Capturar Imagen", command=capture_and_upload)
capture_button.pack(pady=10)

start_button = tk.Button(root, text="Iniciar Captura Automática",
                         command=start_auto_capture)
start_button.pack(pady=10)

stop_button = tk.Button(root, text="Detener Captura Automática",
                        command=stop_auto_capture)
stop_button.pack(pady=10)

def update_frame():
    global detected_patient
    try:
        imgResponse = urllib.request.urlopen(url, timeout=2)
        imgNp = np.array(bytearray(imgResponse.read()), dtype=np.uint8)
        orig = cv2.imdecode(imgNp, -1)
        orig = cv2.flip(orig, 1)

        faces = cv2.CascadeClassifier(cv2.data.haarcascades +
                                      "haarcascade_frontalface_default.xml") \
            .detectMultiScale(orig, 1.1, 5)

        for (x, y, w, h) in faces:
            face = orig[y:y + h, x:x + w]
            face_rgb = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
            try:
                face_encoding = face_recognition.face_encodings(face_rgb)[0]
                result = face_recognition.compare_faces(facesEncodings, face_encoding)
                if True in result:
                    index = result.index(True)
                    detected_name = facesNames[index]
                    detected_patient = detected_name
                    patient_var.set(detected_name)
                    patient_label.config(text=f"Paciente Detectado: {patient_var.get()}")
                    cv2.rectangle(orig, (x, y), (x + w, y + h), (125, 220, 0), 2)
                    cv2.putText(orig, detected_name, (x, y - 10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (125, 220, 0), 2)
                else:
                    cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)
                    cv2.putText(orig, "Desconocido", (x, y - 10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 2)
            except IndexError:
                pass

        # Convert the annotated frame after drawing, so the boxes actually show
        img = cv2.cvtColor(orig, cv2.COLOR_BGR2RGB)
        img = Image.fromarray(img)
        imgtk = ImageTk.PhotoImage(image=img)
        video_label.imgtk = imgtk
        video_label.configure(image=imgtk)
    except Exception as e:
        print(f"Error al actualizar el frame: {e}")
    root.after(30, update_frame)  # update every 30 ms

update_frame()
root.mainloop()

5. Conclusions
● The use of the ESP32 microcontroller in our project demonstrated its versatility and efficiency, particularly
through its dual-core architecture, which allowed us to run parallel tasks such as controlling the blink
frequency of two LEDs with potentiometers.

●​ Facial recognition implemented with the ESP32 in biomedical settings can significantly reduce the
time medical personnel spend on identification and administrative tasks, allowing for faster access to
patient records and improving overall workflow efficiency in healthcare environments.
6. Bibliography
● Ashari, I. F., Satria, M. D., & Idris, M. (2022). Optimization of an IoT-based parking system using facial
and vehicle license-plate recognition via Amazon Web Service and ESP-32 CAM. Computer
Engineering and Applications, 11(2), 137–144. https://core.ac.uk/download/pdf/529682925.pdf
● Mehendale, N. (2022, July 2). Object detection using ESP32 CAM. SSRN.
https://doi.org/10.2139/ssrn.4152378
● Arduino Forum. (2023, June 6). Reconocimiento facial con ESP32-cam [Forum thread].
https://forum.arduino.cc/t/reconocimiento-facial-con-esp32-cam/1156035
