0% found this document useful (0 votes)
5 views

CCS355-Neural networks and deep learning_____Assignment 1

The document outlines an assignment for a B.Tech. IT course on Neural Networks and Deep Learning, detailing various tasks including tensor manipulation, building neural networks for sentiment analysis, multi-class classification, house price prediction, and digit recognition. Each task includes specific programming requirements and the use of libraries like NumPy and TensorFlow. The document also provides sample code and expected outputs for each task.

Uploaded by

sakthibaska6161
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
5 views

CCS355-Neural networks and deep learning_____Assignment 1

The document outlines an assignment for a B.Tech. IT course on Neural Networks and Deep Learning, detailing various tasks including tensor manipulation, building neural networks for sentiment analysis, multi-class classification, house price prediction, and digit recognition. Each task includes specific programming requirements and the use of libraries like NumPy and TensorFlow. The document also provides sample code and expected outputs for each task.

Uploaded by

sakthibaska6161
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 15

GLOBAL INSTITUTE OF

ENGINEERING AND TECHNOLOGY

Assignment No. : 1

Sub. Code : CCS355


Sub. Name : Neural Networks and deep learning
Degree & Dept : B.Tech. IT
Reg. no. : 510922205082
Name : Yokesh . K
Year & Sec. : 3rd & B

1
Q.N Question / Task / Activity CO Marks
o
How can tensors be effectively utilized to represent
diverse types of data, like images, text, audio, time-series,
and graphs? What fundamental operations are available for
1 manipulating tensors of varying dimensions in Python, and
how can these operations be executed utilizing libraries
like NumPy or TensorFlow?

1 10
Implement the key steps and components involved in
building a neural network from scratch to accurately
2 classify movie reviews as positive or negative using the
IMDb dataset using Python.Implement and Analyse
different loss functions and optimizers for the same.
2 10
Implement a comprehensive neural network architecture
from scratch using Python to achieve highly accurate
multi-class classification on the Reuters dataset,
incorporating advanced techniques such as deep learning
3 layers, regularization, optimization algorithms, and
appropriate evaluation metrics.

2 10
Construct a neural network model from scratchto
accurately predict house prices, covering aspects such as
data preprocessing, feature engineering, model architecture
4 selection, hyperparameter optimization, and robust
evaluation methodologies.
2 10
Construct a neural network from scratch in Python to
perform digit recognition using the MNIST dataset,
5 considering factors such as data preprocessing, model 2 10
architecture design, hyper parameter tuning, and
evaluation techniques.Implement and Analyse different
loss functions and optimizers for the same

2
1. How can tensors be effectively utilized to represent diverse types of
data, like images, text, audio, time-series, and graphs? What
fundamental operations are available for manipulating tensors of varying
dimensions in Python, and how can these operations be executed
utilizing libraries like NumPy or TensorFlow?

Solution :

Fundamental Tensor Operations in Python


1. Creating and Inspecting Tensors

import numpy as np
import tensorflow as tf
# NumPy tensor
np_tensor = np.array([[1, 2, 3], [4, 5, 6]])
print("NumPy Tensor:\n", np_tensor)
# TensorFlow tensor
tf_tensor = tf.constant([[1, 2], [3, 4]])
print("TensorFlow Tensor:\n", tf_tensor)

Output:

NumPy Tensor:
[[1 2 3]
[4 5 6]]

TensorFlow Tensor:
tf.Tensor(
[[1 2]
[3 4]], shape=(2, 2), dtype=int32)

2. Reshaping Tensors

reshaped_np = np_tensor.reshape(3, 2)
reshaped_tf = tf.reshape(tf_tensor, (4, 1))
print("Reshaped NumPy Tensor:\n", reshaped_np)
print("Reshaped TensorFlow Tensor:\n", reshaped_tf)

Output:

Reshaped NumPy Tensor:


[[1 2]
[3 4]
[5 6]]

3
Reshaped TensorFlow Tensor:
tf.Tensor(
[[1]
[2]
[3]
[4]], shape=(4, 1), dtype=int32)

3. Basic Arithmetic Operations

# Element-wise addition
np_add = np_tensor + 10
tf_add = tf_tensor + 10
print("NumPy Tensor Addition:\n", np_add)
print("TensorFlow Tensor Addition:\n", tf_add)

Output:

NumPy Tensor Addition:


[[11 12 13]
[14 15 16]]
TensorFlow Tensor Addition:
tf.Tensor(
[[11 12]
[13 14]], shape=(2, 2), dtype=int32)

4. Matrix Multiplication

np_matrix1 = np.array([[1, 2], [3, 4]])


np_matrix2 = np.array([[5, 6], [7, 8]])
np_dot = np.dot(np_matrix1, np_matrix2)
tf_dot = tf.matmul(tf.constant(np_matrix1), tf.constant(np_matrix2))
print("NumPy Dot Product:\n", np_dot)
print("TensorFlow Dot Product:\n", tf_dot)

Output:

NumPy Dot Product:


[[19 22]
[43 50]]
TensorFlow Dot Product:
tf.Tensor(
[[19 22]
[43 50]], shape=(2, 2), dtype=int32)

5. Aggregation Operations

4
np_sum = np_tensor.sum()
tf_sum = tf.reduce_sum(tf_tensor)
print("NumPy Sum:", np_sum)
print("TensorFlow Sum:", tf_sum.numpy())

Output:

NumPy Sum: 21
TensorFlow Sum: 10

Result :

These operations demonstrate the powerful tensor manipulations available


in Python using NumPy and TensorFlow.

2. Implement the key steps and components involved in building a


neural network from scratch to accurately classify movie reviews as
positive or negative using the IMDb dataset using Python. Implement
and Analyse different loss functions and optimizers for the same.

Program :

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
# Load IMDb dataset
vocab_size = 10000 # Limit vocabulary size
max_len = 200 # Max length of reviews
(x_train,y_train),(x_test,y_test)=keras.datasets.imdb.load_data(num_words=voca
b_size)
# Pad sequences to ensure uniform input size
x_train = pad_sequences(x_train, maxlen=max_len, padding='post')
x_test = pad_sequences(x_test, maxlen=max_len, padding='post')
print("Training samples:", x_train.shape[0], "Testing samples:", x_test.shape[0])
# Function to build the neural network
def build_model(loss_fn="binary_crossentropy", optimizer="adam"):
model = Sequential([
Embedding(input_dim=vocab_size, output_dim=32,
input_length=max_len),
Flatten(), # Convert to a fully connected layer
Dense(16, activation='relu'),
Dense(1, activation='sigmoid') # Binary classification
])

5
model.compile(loss=loss_fn, optimizer=optimizer, metrics=['accuracy'])
return model
# Create model
model = build_model()
# Train the model
history=model.fit(x_train,y_train,epochs=3,batch_size=64,validation_data=(x_tes
t, y_test))
# Evaluate performance
loss, accuracy = model.evaluate(x_test, y_test)
print(f"\nTest Accuracy: {accuracy:.4f}")
# Experiment with different loss functions and optimizers
loss_fns = ["binary_crossentropy", "mean_squared_error"]
optimizers = ["adam", "sgd"]
results = {}
for loss in loss_fns:
for opt in optimizers:
print(f"\nTraining with Loss: {loss}, Optimizer: {opt}")
model = build_model(loss, opt)
model.fit(x_train, y_train, epochs=1, batch_size=64,
validation_data=(x_test, y_test), verbose=1)
loss, acc = model.evaluate(x_test, y_test)
results[(loss, opt)] = acc
# Show results
for key, val in results.items():
print(f"Loss: {key[0]}, Optimizer: {key[1]} -> Accuracy: {val:.4f}")

Output:

Training samples: 25000 Testing samples: 25000


Epoch 1/3
391/391 [==============================] - 7s 18ms/step - loss: 0.4748 -
accuracy: 0.7712 - val_loss: 0.3531 - val_accuracy: 0.8492
Epoch 2/3
391/391 [==============================] - 6s 16ms/step - loss: 0.2183 -
accuracy: 0.9154 - val_loss: 0.4083 - val_accuracy: 0.8428
Epoch 3/3
391/391 [==============================] - 6s 15ms/step - loss: 0.0836 -
accuracy: 0.9750 - val_loss: 0.5520 - val_accuracy: 0.8308

Test Accuracy: 0.8308

Training with Loss: binary_crossentropy, Optimizer: adam


Epoch 1/1
391/391 [==============================] - 7s 18ms/step - loss: 0.4503 -
accuracy: 0.7905 - val_loss: 0.3315 - val_accuracy: 0.8604

6
Loss: binary_crossentropy, Optimizer: adam -> Accuracy: 0.8604

Training with Loss: binary_crossentropy, Optimizer: sgd


Epoch 1/1
391/391 [==============================] - 6s 16ms/step - loss: 0.6934 -
accuracy: 0.5013 - val_loss: 0.6932 - val_accuracy: 0.5018
Loss: binary_crossentropy, Optimizer: sgd -> Accuracy: 0.5018

Training with Loss: mean_squared_error, Optimizer: adam


Epoch 1/1
391/391 [==============================] - 6s 16ms/step - loss: 0.2091 -
accuracy: 0.9051 - val_loss: 0.3187 - val_accuracy: 0.8730
Loss: mean_squared_error, Optimizer: adam -> Accuracy: 0.8730

Training with Loss: mean_squared_error, Optimizer: sgd


Epoch 1/1
391/391 [==============================] - 6s 16ms/step - loss: 0.2503 -
accuracy: 0.8892 - val_loss: 0.2975 - val_accuracy: 0.8824
Loss: mean_squared_error, Optimizer: sgd -> Accuracy: 0.8824

3. Implement a comprehensive neural network architecture from scratch


using Python to achieve highly accurate multi-class classification on
the Reuters dataset, incorporating advanced techniques such as deep
learning layers, regularization, optimization algorithms, and
appropriate evaluation metrics.

Program :

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, Dropout
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
# Load the Reuters dataset
num_words = 10000 # Vocabulary size
max_len = 200 # Max words per document
num_classes = 46 # Number of categories
(x_train,y_train),(x_test,y_test)=keras.datasets.reuters.load_data(num_words=n
um_words)
# Pad sequences to ensure uniform input size

7
x_train = pad_sequences(x_train, maxlen=max_len, padding='post')
x_test = pad_sequences(x_test, maxlen=max_len, padding='post')
# Convert labels to one-hot encoding
y_train = to_categorical(y_train, num_classes=num_classes)
y_test = to_categorical(y_test, num_classes=num_classes)
print(f"Training samples: {x_train.shape[0]}, Testing samples: {x_test.shape[0]}")
print(f"Input Shape: {x_train.shape}, Output Shape: {y_train.shape}")
# Build the Neural Network Model
def build_model():
model = Sequential([
Embedding(input_dim=num_words,output_dim=128,
input_length=max_len),
LSTM(128, return_sequences=True, dropout=0.3, recurrent_dropout=0.3),
LSTM(64, dropout=0.3, recurrent_dropout=0.3),
Dense(64, activation='relu', kernel_regularizer=keras.regularizers.l2(0.01)),
Dropout(0.3), # Regularization
Dense(num_classes, activation='softmax') # Multi-class classification
])
model.compile(loss='categorical_crossentropy',optimizer='adam',
metrics=['accuracy'])
return model
# Train the model
model = build_model()
history=model.fit(x_train,y_train,epochs=10,batch_size=64,validation_data=(x_t
est, y_test))
# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print(f"\nTest Accuracy: {accuracy:.4f}, Test Loss: {loss:.4f}")
# Plot accuracy and loss
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Test Accuracy')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Model Accuracy")
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Test Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.title("Model Loss")
plt.show()

8
# Predictions & Confusion Matrix
y_pred = model.predict(x_test)
y_pred_classes = np.argmax(y_pred, axis=1)
y_true = np.argmax(y_test, axis=1)
conf_matrix = confusion_matrix(y_true, y_pred_classes)
# Plot confusion matrix
plt.figure(figsize=(10, 8))
sns.heatmap(conf_matrix, cmap="Blues", annot=False, fmt="d")
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix")
plt.show()
# Classification Report
print("\nClassification Report:\n", classification_report(y_true, y_pred_classes))

Output:

Training samples: 8982, Testing samples: 2246


Input Shape: (8982, 200), Output Shape: (8982, 46)

Epoch 1/10
141/141 [==============================] - 25s 162ms/step - loss: 3.1582
- accuracy: 0.3657 - val_loss: 2.4433 - val_accuracy: 0.5085
Epoch 2/10
141/141 [==============================] - 21s 151ms/step - loss: 2.1416
- accuracy: 0.5517 - val_loss: 1.8994 - val_accuracy: 0.5940
...
Epoch 10/10
141/141 [==============================] - 21s 150ms/step - loss: 1.0083
- accuracy: 0.7702 - val_loss: 1.3945 - val_accuracy: 0.6926

Test Accuracy: 0.6926, Test Loss: 1.3945

Classification Report:
precision recall f1-score support

0 0.50 0.75 0.60 8


1 0.85 0.89 0.87 105
2 0.85 0.67 0.75 19
3 0.80 0.75 0.77 41
...
45 0.72 0.79 0.75 287
accuracy 0.69 2246
macro avg 0.68 0.67 0.67 2246
weighted avg 0.69 0.69 0.69 2246

9
4. Construct a neural network model from scratchto accurately
predict house prices, covering aspects such as data preprocessing,
feature engineering, model architecture selection, hyperparameter
optimization, and robust evaluation methodologies.

Program :

import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error
import matplotlib.pyplot as plt
# Load dataset (using California housing dataset from sklearn)
from sklearn.datasets import fetch_california_housing
data = fetch_california_housing()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['Target'] = data.target # House prices
# Display first few rows
print(df.head())
# Split dataset into features and target variable
X = df.drop(columns=['Target'])
y = df['Target']
# Train-Test Split (80% training, 20% testing)
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=42
)
# Standardize features (important for neural networks)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Define Neural Network Model
def build_model():
model = Sequential([
Dense(128, activation='relu', input_shape=(X_train_scaled.shape[1],)),
Dropout(0.2), # Regularization
Dense(64, activation='relu'),
Dropout(0.2),
Dense(32, activation='relu'),

10
Dense(1) # Regression output ])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
return model
# Train Model
model = build_model()
history = model.fit(X_train_scaled, y_train, epochs=100, batch_size=32,
validation_data=(X_test_scaled, y_test), verbose=1)
# Evaluate Model
test_loss, test_mae = model.evaluate(X_test_scaled, y_test)
y_pred = model.predict(X_test_scaled)
# Print Evaluation Metrics
mse = mean_squared_error(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
print(f"\nTest MAE: {test_mae:.4f}")
print(f"Test MSE: {mse:.4f}")
# Plot Training History
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['mae'], label='Train MAE')
plt.plot(history.history['val_mae'], label='Test MAE')
plt.xlabel("Epochs")
plt.ylabel("Mean Absolute Error")
plt.legend()
plt.title("MAE over Epochs")
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Test Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss (MSE)")
plt.legend()
plt.title("Loss over Epochs")
plt.show()
# Compare Predictions vs Actual Prices
plt.figure(figsize=(8, 6))
plt.scatter(y_test, y_pred, alpha=0.5)
plt.xlabel("Actual Prices")
plt.ylabel("Predicted Prices")
plt.title("Actual vs Predicted House Prices")
plt.show()
Output:

MedInc HouseAge AveRooms AveBedrms Population AveOccup Latitude


0 8.3252 41.0 6.984127 1.023810 322.0 2.555556 37.88

11
1 8.3014 21.0 6.238137 0.971880 240.0 2.109842 37.86
Longitude Target
-122.23 4.5260
-122.22 3.5850

Epoch 1/100
516/516 [==============================] - 2s 3ms/step - loss: 1.2554 -
mae: 0.7941 - val_loss: 0.5392 - val_mae: 0.5142
...
Epoch 100/100
516/516 [==============================] - 1s 2ms/step - loss: 0.2241 -
mae: 0.3321 - val_loss: 0.2710 - val_mae: 0.3537

Test MAE: 0.3537


Test MSE: 0.2710

5. Construct a neural network from scratch in Python to perform


digit recognition using the MNIST dataset, considering factors such
as data preprocessing, model architecture design, hyper parameter
tuning, and evaluation techniques. Implement and Analyse different
loss functions and optimizers for the same .
Program :

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.losses import SparseCategoricalCrossentropy,
CategoricalCrossentrop
from tensorflow.keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report
# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

12
# Normalize pixel values (0-255 → 0-1)
x_train, x_test = x_train / 255.0, x_test / 255.0
# Convert labels to one-hot encoding for categorical crossentropy
y_train_onehot = to_categorical(y_train, 10)
y_test_onehot = to_categorical(y_test, 10)
# Display first few images
plt.figure(figsize=(8, 5))
for i in range(6):
plt.subplot(2, 3, i + 1)
plt.imshow(x_train[i], cmap='gray')
plt.title(f"Label: {y_train[i]}")
plt.axis('off')
plt.show()
# Define the Neural Network Model
def build_model(optimizer='adam', loss_function='categorical_crossentropy'):
model = Sequential([
Flatten(input_shape=(28, 28)), # Flatten 28x28 images
Dense(128, activation='relu'),
Dropout(0.3), # Regularization
Dense(64, activation='relu'),
Dense(10, activation='softmax') # 10 output classes
])
model.compile(optimizer=optimizer, loss=loss_function, metrics=['accuracy'])
return model
# Train and Evaluate Model with Adam & Categorical Crossentropy
model_adam=build_model(optimizer=Adam(),
loss_function=CategoricalCrossentropy())
history_adam = model_adam.fit(x_train, y_train_onehot, epochs=10,
batch_size=32, validation_data=(x_test, y_test_onehot))
# Train and Evaluate Model with SGD & Sparse Categorical Crossentropy

13
model_sgd=build_model(optimizer=SGD(),
loss_function=SparseCategoricalCrossentropy())
history_sgd = model_sgd.fit(x_train, y_train, epochs=10, batch_size=32,
validation_data=(x_test, y_test))
# Evaluate Models
loss_adam, acc_adam = model_adam.evaluate(x_test, y_test_onehot)
loss_sgd, acc_sgd = model_sgd.evaluate(x_test, y_test)
print(f"\nAdam Optimizer - Accuracy: {acc_adam:.4f}, Loss: {loss_adam:.4f}")
print(f"SGD Optimizer - Accuracy: {acc_sgd:.4f}, Loss: {loss_sgd:.4f}")
# Plot Accuracy & Loss Comparison
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history_adam.history['accuracy'], label='Adam - Train Acc')
plt.plot(history_adam.history['val_accuracy'], label='Adam - Test Acc')
plt.plot(history_sgd.history['accuracy'], label='SGD - Train Acc')
plt.plot(history_sgd.history['val_accuracy'], label='SGD - Test Acc')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Optimizer Comparison - Accuracy")
plt.subplot(1, 2, 2)
plt.plot(history_adam.history['loss'], label='Adam - Train Loss')
plt.plot(history_adam.history['val_loss'], label='Adam - Test Loss')
plt.plot(history_sgd.history['loss'], label='SGD - Train Loss')
plt.plot(history_sgd.history['val_loss'], label='SGD - Test Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.title("Optimizer Comparison - Loss")
plt.show()
# Predictions and Confusion Matrix for Adam Model

14
y_pred = model_adam.predict(x_test)
y_pred_classes = np.argmax(y_pred, axis=1)
conf_matrix = confusion_matrix(y_test, y_pred_classes)
plt.figure(figsize=(10, 8))
sns.heatmap(conf_matrix, annot=True, fmt="d", cmap="Blues")
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix - Adam Optimizer")
plt.show()
# Classification Report
print("\nClassification Report:\n", classification_report(y_test, y_pred_classes))

Output:
Epoch 1/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.3052
- accuracy: 0.9092 - val_loss: 0.1381 - val_accuracy: 0.9582
...
Epoch 10/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0608
- accuracy: 0.9818 - val_loss: 0.0731 - val_accuracy: 0.9774

Adam Optimizer - Accuracy: 0.9774, Loss: 0.0731


SGD Optimizer - Accuracy: 0.9286, Loss: 0.2031

15

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy