Experiment 3

The document outlines a Python script that implements a decision tree classifier on the Iris dataset, including data loading, splitting, hyperparameter tuning using GridSearchCV, and model evaluation. The best hyperparameters found were a maximum depth of 4, a minimum samples split of 10, and the Gini criterion, resulting in a perfect accuracy of 1.0 on the test set. Additionally, the decision tree is visualized to illustrate the model's structure.


import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import classification_report, accuracy_score

# Load dataset
iris = load_iris()
X = iris.data
y = iris.target
feature_names = iris.feature_names
class_names = iris.target_names

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Define the base model
dtree = DecisionTreeClassifier(random_state=42)

# Define the hyperparameter grid
param_grid = {
    'max_depth': [2, 3, 4, 5],
    'min_samples_split': [2, 5, 10],
    'criterion': ['gini', 'entropy']
}

# Grid search for hyperparameter tuning
grid_search = GridSearchCV(dtree, param_grid, cv=5)
grid_search.fit(X_train, y_train)

# Best model from grid search
best_tree = grid_search.best_estimator_

# Predict on test set
y_pred = best_tree.predict(X_test)

# Evaluate the model
print("Best Hyperparameters:", grid_search.best_params_)
print("\nClassification Report:")
print(classification_report(y_test, y_pred))
print("Accuracy:", accuracy_score(y_test, y_pred))

# Visualize the decision tree
plt.figure(figsize=(16, 10))
plot_tree(best_tree,
          feature_names=feature_names,
          class_names=class_names,
          filled=True,
          rounded=True)
plt.title("Decision Tree Visualization (Best Model)")
plt.show()

Output:

Best Hyperparameters: {'criterion': 'gini', 'max_depth': 4, 'min_samples_split': 10}

Classification Report:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        19
           1       1.00      1.00      1.00        13
           2       1.00      1.00      1.00        13

    accuracy                           1.00        45
   macro avg       1.00      1.00      1.00        45
weighted avg       1.00      1.00      1.00        45

Accuracy: 1.0
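
An accuracy of 1.0 on a 45-sample test set is plausible for the Iris dataset, but it is worth corroborating against the cross-validation estimate. The sketch below is an optional extension (it assumes the script above has already run): grid_search.best_score_ reports the mean cross-validated accuracy of the best parameter combination, and confusion_matrix shows the per-class test-set errors. Both are standard scikit-learn APIs.

# Optional sanity check: compare the mean cross-validated accuracy of
# the best parameter combination with the test-set accuracy, and
# inspect per-class errors via the confusion matrix.
from sklearn.metrics import confusion_matrix

print("Best CV Accuracy:", grid_search.best_score_)
print("Confusion Matrix:")
print(confusion_matrix(y_test, y_pred))

If the cross-validation accuracy is noticeably lower than the test accuracy, the perfect test score is likely an artifact of the particular 70/30 split rather than evidence of a flawless model.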
