Easy Pract ML

The document provides simplified Python code examples for various machine learning practicals using libraries like sklearn. Each practical demonstrates a different algorithm, including Linear Regression, Logistic Regression, Decision Trees, SVM, Naive Bayes, PCA, KNN, and RBF SVM, with built-in datasets for easy understanding. The code is designed to be beginner-friendly, with clear variable names and straightforward logic.


Great!

Below are short and easy versions of each practical with built-in data so you don't
need to upload anything.

✅ Practical 1: Linear Regression (Predict Salary from Experience)


from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# Sample data
X = [[1], [2], [3], [4], [5]]
y = [15000, 25000, 35000, 45000, 55000]

# Train the model
model = LinearRegression()
model.fit(X, y)

# Plot the data and the fitted line
plt.scatter(X, y, color='green')
plt.plot(X, model.predict(X), color='red')
plt.title("Experience vs Salary")
plt.xlabel("Years")
plt.ylabel("Salary")
plt.show()
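
Since this sample data is perfectly linear (salary = 10000 * years + 5000), the fitted model recovers that line exactly, and you can inspect it directly (a small optional extra):

# Slope and intercept of the fitted line
print("Slope:", model.coef_[0])
print("Intercept:", model.intercept_)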

✅ Practical 2: Logistic Regression (Predict Purchase)


from sklearn.linear_model import LogisticRegression

# Sample data: [Age, Salary], target: Buy (0 or 1)
X = [[22, 30000], [25, 32000], [30, 60000], [35, 58000], [40, 90000]]
y = [0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)
print("Pred:", model.predict([[28, 55000]]))

✅ Practical 3: Decision Tree


from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
import matplotlib.pyplot as plt

X = [[0, 0], [1, 1], [1, 0], [0, 1]]
y = [0, 1, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)
tree.plot_tree(model)
plt.show()
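
To read the learned rules as plain text instead of a plot, sklearn's export_text works too (optional extra):

from sklearn.tree import export_text

# Prints the if/else splits the tree learned
print(export_text(model, feature_names=["f1", "f2"]))
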
✅ Practical 4: SVM
from sklearn.svm import SVC

X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]

model = SVC(kernel='rbf')
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))

✅ Practical 5: Naive Bayes


from sklearn.naive_bayes import GaussianNB

X = [[1, 0], [2, 0], [3, 1], [4, 1], [5, 1]]
y = [0, 0, 1, 1, 1]

model = GaussianNB()
model.fit(X, y)
print("Pred:", model.predict([[2, 1]]))

✅ Practical 6: PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
sc = StandardScaler()
X_scaled = sc.fit_transform(X)

pca = PCA(n_components=1)
X_pca = pca.fit_transform(X_scaled)
print("PCA Output:\n", X_pca)

✅ Practical 7: KNN
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 2], [2, 3], [3, 4], [6, 5], [7, 7]]
y = [0, 0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))

✅ Practical 8: SVM with RBF (Radial Basis Function)


from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# n_redundant=0 is required here: with only 2 features, the default
# of 2 informative + 2 redundant features would raise an error
X, y = make_classification(n_samples=50, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = SVC(kernel='rbf', gamma=0.1)
model.fit(X_train, y_train)
print("Test Accuracy:", model.score(X_test, y_test))

Let me know if you want all these in one file or with explanations too!

Perfectly clear now! Here's what I'll do:

✅ Step-by-step plan:

1. Read and learn the exact code from your PDF (already done).
2. Keep the logic the same.
3. Simplify the code:
   - Use shorter, meaningful variable names.
   - Remove unnecessary steps or comments.
   - Keep the dataset inside the code, not in files.
   - Keep it accurate, but beginner-friendly.

✅ Practical 1: Linear Regression – Simplified


from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# Input: Experience and Salary
exp = [[1.1], [1.3], [1.5], [2.0], [2.2], [2.9], [3.0], [3.2], [3.2], [3.7]]
sal = [39343, 46205, 37731, 43525, 39891, 56642, 60150, 54445, 64445, 57189]

# Train model
model = LinearRegression()
model.fit(exp, sal)

# Predict
pred = model.predict([[4]])
print("Predicted salary for 4 years experience:", pred[0])

# Plot
plt.scatter(exp, sal, color='blue')
plt.plot(exp, model.predict(exp), color='red')
plt.scatter(4, pred[0], color='green')
plt.title('Experience vs Salary')
plt.xlabel('Experience (Years)')
plt.ylabel('Salary')
plt.grid(True)
plt.show()
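
If you want a single number for how well the line fits, model.score returns the R² value on the training data (optional extra):

# R² close to 1 means the line fits the points well
print("R² score:", model.score(exp, sal))
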
✅ Practical 2: Logistic Regression – Simplified
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt

# Input: Age, Salary and Purchased
data = [[19, 19000], [35, 20000], [26, 43000], [27, 57000], [19, 76000],
[27, 58000], [27, 84000], [32, 150000], [25, 33000], [35, 65000]]
buy = [0, 0, 0, 0, 0, 0, 0, 1, 0, 1]

# Train model
model = LogisticRegression()
model.fit(data, buy)

# Predict
test = [[30, 87000]]
result = model.predict(test)
print("Prediction for age 30 and salary 87000:", result[0])

# Plot: Age vs Purchased
age = [d[0] for d in data]
colors = ['red' if b == 0 else 'green' for b in buy]

plt.scatter(age, buy, color=colors)
plt.axhline(0.5, color='blue', linestyle='--')
plt.scatter(30, result[0], color='black')
plt.title('Age vs Purchase Prediction')
plt.xlabel('Age')
plt.ylabel('Purchased (0=No, 1=Yes)')
plt.grid(True)
plt.show()
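
One caution: this model is fit on raw ages and salaries, and features on such different scales can slow the solver down. A minimal sketch of the usual fix, scaling the features inside a pipeline (my addition, reusing the data and buy lists from above):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Standardize features, then fit logistic regression in one step
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(data, buy)
print("Scaled prediction:", pipe.predict([[30, 87000]]))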

✅ Both programs now:

- Have short and meaningful variables like exp, sal, buy, etc.
- Use direct dataset values, not external files.
- Keep the same logic as your PDF.
- Include simple and clean graphs for better understanding.

If you share more practicals in a PDF, I’ll repeat this same simplification style for all of them.
Want me to wait for the next one?

Great! Here are simple Python programs using tiny datasets for each of the listed experiments.
Each program is beginner-friendly and uses popular libraries like sklearn and numpy.
1. Linear Regression
from sklearn.linear_model import LinearRegression

# Dataset: Hours studied vs Marks
X = [[1], [2], [3]]
y = [2, 4, 6]

model = LinearRegression()
model.fit(X, y)

print("Prediction for 4 hours:", model.predict([[4]]))

2. Logistic Regression
from sklearn.linear_model import LogisticRegression

# Dataset: Hours studied vs Pass(1)/Fail(0)
X = [[1], [2], [3], [4], [5]]
y = [0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

print("Prediction for 2.5 hours:", model.predict([[2.5]]))

3. CART Decision Tree


from sklearn.tree import DecisionTreeClassifier

# Dataset: [Temperature, Humidity] vs Play (1: Yes, 0: No)
X = [[30, 70], [25, 80], [20, 90], [35, 60]]
y = [1, 1, 0, 0]

model = DecisionTreeClassifier()
model.fit(X, y)

print("Prediction for [28, 75]:", model.predict([[28, 75]]))

4. Support Vector Machine (SVM)


from sklearn import svm

# Dataset: [Height, Weight] vs Gender (0: Female, 1: Male)
X = [[150, 45], [160, 55], [170, 65], [180, 75]]
y = [0, 0, 1, 1]

model = svm.SVC()
model.fit(X, y)

print("Prediction for [165, 60]:", model.predict([[165, 60]]))


5. Bayesian Classification (GaussianNB)
from sklearn.naive_bayes import GaussianNB

# Dataset: [Age, Salary] vs Buys Product (0/1)
X = [[25, 50000], [30, 60000], [35, 70000], [40, 80000]]
y = [0, 0, 1, 1]

model = GaussianNB()
model.fit(X, y)

print("Prediction for [32, 65000]:", model.predict([[32, 65000]]))

6. PCA (Principal Component Analysis)


from sklearn.decomposition import PCA
import numpy as np

# Dataset: [Height, Weight, Age]
X = np.array([[160, 50, 20], [170, 60, 22], [180, 70, 24]])

pca = PCA(n_components=2)
reduced_X = pca.fit_transform(X)

print("Reduced Data:\n", reduced_X)

7. K-Nearest Neighbour (KNN)


from sklearn.neighbors import KNeighborsClassifier

# Dataset: [Marks in Math, Science] vs Pass/Fail
X = [[30, 40], [35, 45], [80, 85], [85, 90]]
y = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)

print("Prediction for [33, 42]:", model.predict([[33, 42]]))

8. Radial Basis Function (RBF) Kernel with SVM


from sklearn.svm import SVC

# Dataset: [X, Y] points from two classes
X = [[1, 2], [2, 3], [3, 3], [6, 7], [7, 8], [8, 8]]
y = [0, 0, 0, 1, 1, 1]

model = SVC(kernel='rbf')
model.fit(X, y)

print("Prediction for [4, 5]:", model.predict([[4, 5]]))


Let me know if you'd like output screenshots, graph visualizations (e.g. for regression/decision
trees), or explanations for any of these.
