practical-no12

October 15, 2024

[1]: import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, GRU, Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

WARNING:tensorflow:From C:\Users\shivr\anaconda3\Lib\site-
packages\keras\src\losses.py:2976: The name
tf.losses.sparse_softmax_cross_entropy is deprecated. Please use
tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

[2]: # Example preprocessed dataset
sentences = [
    "The bank was flooded with water after the storm.",
    "He deposited the money at the bank."
]
labels = ["river_bank", "financial_institution"]  # Word senses for "bank"

[3]: # Tokenization and preprocessing
vocab_size = 10000  # Just an example size, adjust as needed
max_length = 10  # Max length of a sentence

# Assuming sentences are tokenized and converted to integers
X_data = np.random.randint(vocab_size, size=(len(sentences), max_length))

# Label encoding for senses
label_encoder = LabelEncoder()
y_data = label_encoder.fit_transform(labels)
y_data = to_categorical(y_data)  # One-hot encoding for categorical output
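
Note that cell [3] substitutes random integers for real token IDs, so the model never actually sees the words in the sentences. A minimal sketch of how genuine tokenization could look with the Keras preprocessing utilities follows (this cell was not part of the original run, and the variable names are illustrative):

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Fit a tokenizer on the example sentences and pad/truncate to max_length
tokenizer = Tokenizer(num_words=vocab_size, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)
X_data = pad_sequences(sequences, maxlen=max_length, padding='post')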

[4]: # Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.2)

# Define the model (LSTM or GRU)
def build_model(rnn_type='LSTM'):
    model = Sequential()
    model.add(Embedding(input_dim=vocab_size, output_dim=64, input_length=max_length))

    if rnn_type == 'LSTM':
        model.add(LSTM(128, return_sequences=False))
    elif rnn_type == 'GRU':
        model.add(GRU(128, return_sequences=False))

    model.add(Dense(y_data.shape[1], activation='softmax'))  # Output layer for sense classification

    # Compile the model
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    return model
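
The same helper builds the GRU variant by changing the rnn_type argument; as a usage sketch (not executed in this notebook):

gru_model = build_model(rnn_type='GRU')
gru_model.summary()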

[5]: # Build and train the LSTM model
model = build_model(rnn_type='LSTM')
model.summary()

WARNING:tensorflow:From C:\Users\shivr\anaconda3\Lib\site-
packages\keras\src\backend.py:873: The name tf.get_default_graph is deprecated.
Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From C:\Users\shivr\anaconda3\Lib\site-
packages\keras\src\optimizers\__init__.py:309: The name tf.train.Optimizer is
deprecated. Please use tf.compat.v1.train.Optimizer instead.

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 10, 64) 640000

lstm (LSTM) (None, 128) 98816

dense (Dense) (None, 2) 258

=================================================================
Total params: 739074 (2.82 MB)
Trainable params: 739074 (2.82 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
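
The parameter counts in the summary can be checked by hand: the Embedding layer has vocab_size * output_dim = 10000 * 64 = 640,000 weights; the LSTM has 4 * (input_dim + units + 1) * units = 4 * (64 + 128 + 1) * 128 = 98,816; the Dense output layer has (128 + 1) * 2 = 258; the total is 640,000 + 98,816 + 258 = 739,074, matching the summary.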

[6]: # Train the model
model.fit(X_train, y_train, epochs=50, batch_size=32)

Epoch 1/50
WARNING:tensorflow:From C:\Users\shivr\anaconda3\Lib\site-
packages\keras\src\utils\tf_utils.py:492: The name tf.ragged.RaggedTensorValue
is deprecated. Please use tf.compat.v1.ragged.RaggedTensorValue instead.

WARNING:tensorflow:From C:\Users\shivr\anaconda3\Lib\site-
packages\keras\src\engine\base_layer_utils.py:384: The name
tf.executing_eagerly_outside_functions is deprecated. Please use
tf.compat.v1.executing_eagerly_outside_functions instead.

1/1 [==============================] - 3s 3s/step - loss: 0.7011 - accuracy: 0.0000e+00
Epoch 2/50
1/1 [==============================] - 0s 14ms/step - loss: 0.6783 - accuracy:
1.0000
Epoch 3/50
1/1 [==============================] - 0s 10ms/step - loss: 0.6556 - accuracy:
1.0000
Epoch 4/50
1/1 [==============================] - 0s 11ms/step - loss: 0.6317 - accuracy:
1.0000
Epoch 5/50
1/1 [==============================] - 0s 11ms/step - loss: 0.6053 - accuracy:
1.0000
Epoch 6/50
1/1 [==============================] - 0s 11ms/step - loss: 0.5753 - accuracy:
1.0000
Epoch 7/50
1/1 [==============================] - 0s 10ms/step - loss: 0.5406 - accuracy:
1.0000
Epoch 8/50
1/1 [==============================] - 0s 10ms/step - loss: 0.5000 - accuracy:
1.0000
Epoch 9/50
1/1 [==============================] - 0s 10ms/step - loss: 0.4523 - accuracy:
1.0000
Epoch 10/50
1/1 [==============================] - 0s 10ms/step - loss: 0.3966 - accuracy:
1.0000
Epoch 11/50
1/1 [==============================] - 0s 10ms/step - loss: 0.3329 - accuracy:
1.0000
Epoch 12/50
1/1 [==============================] - 0s 10ms/step - loss: 0.2620 - accuracy: 1.0000
Epoch 13/50
1/1 [==============================] - 0s 11ms/step - loss: 0.1877 - accuracy:
1.0000
Epoch 14/50
1/1 [==============================] - 0s 10ms/step - loss: 0.1169 - accuracy:
1.0000
Epoch 15/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0593 - accuracy:
1.0000
Epoch 16/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0228 - accuracy:
1.0000
Epoch 17/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0063 - accuracy:
1.0000
Epoch 18/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0013 - accuracy:
1.0000
Epoch 19/50
1/1 [==============================] - 0s 9ms/step - loss: 2.1324e-04 -
accuracy: 1.0000
Epoch 20/50
1/1 [==============================] - 0s 9ms/step - loss: 3.4212e-05 -
accuracy: 1.0000
Epoch 21/50
1/1 [==============================] - 0s 11ms/step - loss: 6.1989e-06 -
accuracy: 1.0000
Epoch 22/50
1/1 [==============================] - 0s 11ms/step - loss: 1.4305e-06 -
accuracy: 1.0000
Epoch 23/50
1/1 [==============================] - 0s 9ms/step - loss: 3.5763e-07 -
accuracy: 1.0000
Epoch 24/50
1/1 [==============================] - 0s 10ms/step - loss: 1.1921e-07 -
accuracy: 1.0000
Epoch 25/50
1/1 [==============================] - 0s 9ms/step - loss: 1.1921e-07 -
accuracy: 1.0000
Epoch 26/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 27/50
1/1 [==============================] - 0s 11ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 28/50
1/1 [==============================] - 0s 112ms/step - loss: 0.0000e+00 - accuracy: 1.0000
Epoch 29/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 30/50
1/1 [==============================] - 0s 25ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 31/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 32/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 33/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 34/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 35/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 36/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 37/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 38/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 39/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 40/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 41/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 42/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 43/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 44/50
1/1 [==============================] - 0s 11ms/step - loss: 0.0000e+00 - accuracy: 1.0000
Epoch 45/50
1/1 [==============================] - 0s 9ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 46/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 47/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 48/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 49/50
1/1 [==============================] - 0s 8ms/step - loss: 0.0000e+00 -
accuracy: 1.0000
Epoch 50/50
1/1 [==============================] - 0s 10ms/step - loss: 0.0000e+00 -
accuracy: 1.0000

[6]: <keras.src.callbacks.History at 0x1ed10b4dc90>

[7]: # Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}")

1/1 [==============================] - 1s 638ms/step - loss: 18.4027 - accuracy: 0.0000e+00
Test Accuracy: 0.00
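
With only two sentences split 80/20, the test set holds a single example whose sense never appeared in training, and the input features are random integers rather than real token IDs, so a test accuracy of 0.00 is expected: the network memorized the lone training example instead of learning to disambiguate. Mapping a prediction back to a sense label could look like the following sketch (not part of the original run):

# Predict the sense of the held-out example and decode it with the label encoder
pred = model.predict(X_test)
pred_sense = label_encoder.inverse_transform(np.argmax(pred, axis=1))
print(pred_sense)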
