Dhanashree ML Report


Name: Dhanashree Ankush Ghodake

Div: A   Roll No: A14


Aim: Implementation of K-Nearest Neighbour (KNN)
Objectives: 1. Calculate Precision
2. Calculate Accuracy
3. Calculate F1 Score
4. Calculate Recall Score (the metric definitions are sketched below)
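
For reference, all four metrics in the objectives can be written from the confusion-matrix counts: true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). The following is a minimal illustrative sketch, not part of the original report code; the helper name metrics_from_counts is my own:

def metrics_from_counts(tp, fp, fn, tn):
    # Accuracy: fraction of all predictions that are correct
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    # Precision: of the predicted positives, how many are truly positive
    precision = tp / (tp + fp)
    # Recall: of the actual positives, how many were found
    recall = tp / (tp + fn)
    # F1 score: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example: 4 TP, 1 FP, 1 FN, 4 TN
print(metrics_from_counts(4, 1, 1, 4))  # (0.8, 0.8, 0.8, 0.8)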

Requirement: Google Colab


Theory:
K-Nearest Neighbour (KNN) is one of the simplest machine learning algorithms, based
on the supervised learning technique. The KNN algorithm assumes similarity between
the new case/data and the available cases, and puts the new case into the category
that is most similar to the available categories. KNN stores all the available data
and classifies a new data point based on its similarity to the stored points.
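
To make the "store all data, classify by similarity" idea concrete, here is a minimal from-scratch sketch of KNN using Euclidean distance and a majority vote. This is illustrative only; the report code below uses scikit-learn instead, and the function name knn_predict is a hypothetical helper of my own:

from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    # Distance from the query point to every stored training point
    distances = [math.dist(query, x) for x in train_X]
    # Indices of the k nearest neighbours
    nearest = sorted(range(len(train_X)), key=lambda i: distances[i])[:k]
    # Majority vote over the neighbours' labels decides the class
    labels = [train_y[i] for i in nearest]
    return Counter(labels).most_common(1)[0][0]

train_X = [(1, 2), (2, 4), (3, 6), (8, 16), (9, 18), (10, 20)]
train_y = ['A', 'A', 'A', 'B', 'B', 'B']
print(knn_predict(train_X, train_y, (4, 8), k=3))  # prints 'A'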

Code:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)
import matplotlib.pyplot as plt

# Example using a synthetic dataset
data = {
    'feature1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'feature2': [2, 4, 6, 8, 10, 12, 14, 16, 18, 20],
    'target': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']
}
df = pd.DataFrame(data)
print(df.head())  # Display the first few rows

X = df[['feature1', 'feature2']]
y = df['target']

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Create the classifier; adjust the number of neighbours (k) as needed
knn = KNeighborsClassifier(n_neighbors=3)

# Fit the model to the training data
knn.fit(X_train, y_train)

# Predict on the held-out test set
y_pred = knn.predict(X_test)

# Evaluate the model: accuracy, precision, recall, and F1 score
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred, pos_label='B')
recall = recall_score(y_test, y_pred, pos_label='B')
f1 = f1_score(y_test, y_pred, pos_label='B')
print(f"Accuracy: {accuracy}")
print(f"Precision: {precision}")
print(f"Recall: {recall}")
print(f"F1 Score: {f1}")

cm = confusion_matrix(y_test, y_pred)
print(f"Confusion Matrix:\n{cm}")

# Scatter plot of the data, coloured by class
plt.figure(figsize=(8, 6))
plt.scatter(df['feature1'], df['feature2'],
            c=df['target'].map({'A': 'red', 'B': 'blue'}), marker='o')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('KNN Classification')
plt.show()
Output:

Conclusion:
From this experiment, I learned that KNN is a valuable tool in machine learning due
to its interpretability and effectiveness, especially when used with carefully
preprocessed data.
