Easy Pract ML
Below are short and easy versions of each practical with built-in data so you don't
need to upload anything.
✅ Practical 1: Linear Regression
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
# Sample data
X = [[1], [2], [3], [4], [5]]
y = [15000, 25000, 35000, 45000, 55000]
model = LinearRegression()
model.fit(X, y)
plt.scatter(X, y, color='green')
plt.plot(X, model.predict(X), color='red')
plt.title("Experience vs Salary")
plt.xlabel("Years")
plt.ylabel("Salary")
plt.show()
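The fitted line's slope and intercept can also be read straight off the model; a quick sketch on the same tiny dataset:

```python
from sklearn.linear_model import LinearRegression

# Same tiny dataset as above
X = [[1], [2], [3], [4], [5]]
y = [15000, 25000, 35000, 45000, 55000]

model = LinearRegression()
model.fit(X, y)

# Slope (salary increase per year of experience) and intercept (base salary)
print("Slope:", model.coef_[0])
print("Intercept:", model.intercept_)
```

Since this data is exactly linear, the fit recovers the line y = 10000·x + 5000.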
✅ Practical 2: Logistic Regression
from sklearn.linear_model import LogisticRegression
# Sample data: [age, salary] -> purchased (0 = no, 1 = yes)
X = [[22, 25000], [25, 32000], [28, 55000], [35, 60000], [45, 90000]]
y = [0, 0, 1, 1, 1]
model = LogisticRegression()
model.fit(X, y)
print("Pred:", model.predict([[28, 55000]]))
✅ Practical 3: Decision Tree
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
import matplotlib.pyplot as plt
# Sample data
X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]
model = DecisionTreeClassifier()
model.fit(X, y)
tree.plot_tree(model)
plt.show()
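Besides the plot, sklearn can print the learned rules as plain text via export_text, which is handy when no display is available; a small sketch assuming the same tiny two-feature dataset:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Same tiny dataset as above
X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)

# Plain-text view of the learned decision rules
rules = export_text(model)
print(rules)
```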
✅ Practical 4: SVM
from sklearn.svm import SVC
X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]
model = SVC(kernel='rbf')
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))
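If you want to see which training points define the decision boundary, the fitted SVC exposes support_vectors_; a quick sketch reusing the same data:

```python
from sklearn.svm import SVC

# Same tiny dataset as above
X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]

model = SVC(kernel='rbf')
model.fit(X, y)

# Training points that sit on or inside the margin
print("Support vectors:\n", model.support_vectors_)
```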
✅ Practical 5: Naive Bayes
from sklearn.naive_bayes import GaussianNB
X = [[1, 0], [2, 0], [3, 1], [4, 1], [5, 1]]
y = [0, 0, 1, 1, 1]
model = GaussianNB()
model.fit(X, y)
print("Pred:", model.predict([[2, 1]]))
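To see how confident the model is rather than just the class label, GaussianNB also provides predict_proba; a minimal sketch on the same tiny dataset:

```python
from sklearn.naive_bayes import GaussianNB

# Same tiny dataset as above
X = [[1, 0], [2, 0], [3, 1], [4, 1], [5, 1]]
y = [0, 0, 1, 1, 1]

model = GaussianNB()
model.fit(X, y)

# One probability per class; the two values sum to 1
probs = model.predict_proba([[2, 1]])
print("Class probabilities:", probs[0])
```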
✅ Practical 6: PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
X = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
sc = StandardScaler()
X_scaled = sc.fit_transform(X)
pca = PCA(n_components=1)
X_pca = pca.fit_transform(X_scaled)
print("PCA Output:\n", X_pca)
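To check how much of the original variance the single kept component preserves, read pca.explained_variance_ratio_; a short sketch on the same data:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Same tiny dataset as above
X = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=1)
X_pca = pca.fit_transform(X_scaled)

# Fraction of total variance captured by the kept component (between 0 and 1)
ratio = pca.explained_variance_ratio_[0]
print("Explained variance ratio:", ratio)
```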
✅ Practical 7: KNN
from sklearn.neighbors import KNeighborsClassifier
X = [[1, 2], [2, 3], [3, 4], [6, 5], [7, 7]]
y = [0, 0, 0, 1, 1]
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))
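A quick way to sanity-check any of these classifiers is model.score, which reports accuracy on the data you pass in (here the training data itself); a sketch on the same KNN setup:

```python
from sklearn.neighbors import KNeighborsClassifier

# Same tiny dataset as above
X = [[1, 2], [2, 3], [3, 4], [6, 5], [7, 7]]
y = [0, 0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)

# Accuracy on the training data (perfect here, since the classes are well separated)
acc = model.score(X, y)
print("Training accuracy:", acc)
```

On real data you would split into train and test sets first; training accuracy alone is optimistic.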
Let me know if you want all these in one file or with explanations too!
✅ Step-by-step plan:
1. Read and learn the exact code from your PDF (already done).
2. Keep the logic the same.
3. Simplify the code:
   - Use shorter, meaningful variable names.
   - Remove unnecessary steps or comments.
   - Keep the dataset inside the code, not in files.
   - Keep it accurate, but beginner-friendly.
✅ Practical 1: Linear Regression – Simplified
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
# Data: experience (years) vs salary
exp = [[1], [2], [3], [4], [5]]
sal = [15000, 25000, 35000, 45000, 55000]
# Train model
model = LinearRegression()
model.fit(exp, sal)
# Predict
pred = model.predict([[4]])
print("Predicted salary for 4 years experience:", pred[0])
# Plot
plt.scatter(exp, sal, color='blue')
plt.plot(exp, model.predict(exp), color='red')
plt.scatter(4, pred[0], color='green')
plt.title('Experience vs Salary')
plt.xlabel('Experience (Years)')
plt.ylabel('Salary')
plt.grid(True)
plt.show()
✅ Practical 2: Logistic Regression – Simplified
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
# Sample data: [age, salary] -> buy (0 = no, 1 = yes)
data = [[22, 25000], [25, 32000], [30, 87000], [35, 60000], [45, 95000]]
buy = [0, 0, 1, 1, 1]
# Train model
model = LogisticRegression()
model.fit(data, buy)
# Predict
test = [[30, 87000]]
result = model.predict(test)
print("Prediction for age 30 and salary 87000:", result[0])
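To also see the predicted probability instead of only the 0/1 label, LogisticRegression offers predict_proba; a small sketch using illustrative age/salary values (assumed here, since the PDF's exact dataset isn't shown):

```python
from sklearn.linear_model import LogisticRegression

# Illustrative [age, salary] data (assumed, not from the PDF)
data = [[22, 25000], [25, 32000], [30, 87000], [35, 60000], [45, 95000]]
buy = [0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(data, buy)

# Probability of each class for one test customer: [P(no buy), P(buy)]
probs = model.predict_proba([[30, 87000]])
print("P(no buy), P(buy):", probs[0])
```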
- Have short and meaningful variables like exp, sal, buy, etc.
- Use direct dataset values, not external files.
- Keep the same logic as your PDF.
- Include simple and clean graphs for better understanding.
If you share more practicals in a PDF, I’ll repeat this same simplification style for all of them.
Want me to wait for the next one?
Great! Here are simple Python programs using tiny datasets for each of the listed experiments.
Each program is beginner-friendly and uses built-in libraries like sklearn and numpy.
1. Linear Regression
from sklearn.linear_model import LinearRegression
X = [[1], [2], [3], [4], [5]]
y = [2, 4, 6, 8, 10]
model = LinearRegression()
model.fit(X, y)
print("Pred:", model.predict([[6]]))
2. Logistic Regression
from sklearn.linear_model import LogisticRegression
X = [[1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1]
model = LogisticRegression()
model.fit(X, y)
print("Pred:", model.predict([[2]]))
3. Decision Tree
from sklearn.tree import DecisionTreeClassifier
X = [[1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1]
model = DecisionTreeClassifier()
model.fit(X, y)
print("Pred:", model.predict([[2]]))
4. SVM
from sklearn import svm
X = [[1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1]
model = svm.SVC()
model.fit(X, y)
print("Pred:", model.predict([[2]]))
5. Naive Bayes
from sklearn.naive_bayes import GaussianNB
X = [[1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1]
model = GaussianNB()
model.fit(X, y)
print("Pred:", model.predict([[2]]))
6. PCA
from sklearn.decomposition import PCA
X = [[1, 2], [2, 4], [3, 6], [4, 8], [5, 10]]
pca = PCA(n_components=2)
reduced_X = pca.fit_transform(X)
print("Reduced:\n", reduced_X)
7. KNN
from sklearn.neighbors import KNeighborsClassifier
X = [[1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1]
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)
print("Pred:", model.predict([[2]]))
8. SVM with RBF Kernel
from sklearn.svm import SVC
X = [[1, 2], [2, 3], [6, 5], [7, 8]]
y = [0, 0, 1, 1]
model = SVC(kernel='rbf')
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))
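The kernel choice is the only thing that changes between a linear and an RBF SVM in sklearn, so it's easy to compare them side by side; a minimal sketch on a tiny 2-D dataset in the same style as above:

```python
from sklearn.svm import SVC

# Tiny 2-D dataset in the same style as above
X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]

# Fit the same data with two kernels and compare their predictions
preds = {}
for kernel in ['linear', 'rbf']:
    model = SVC(kernel=kernel)
    model.fit(X, y)
    preds[kernel] = model.predict([[4, 4]])[0]
    print(kernel, "prediction:", preds[kernel])
```

On such a tiny, well-separated dataset both kernels usually agree; they differ once the classes are not linearly separable.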