Introduction
1. What is the K-Nearest Neighbours (KNN) algorithm, and what are some strategies for improving
the efficiency of KNN for large datasets?
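To make the question concrete, here is a minimal brute-force KNN classifier in plain Python (the function name `knn_predict` and the toy data are illustrative, not from any particular library). Brute force costs one distance computation per training point per query; for large datasets, spatial indexes (KD-trees, ball trees) or approximate methods such as locality-sensitive hashing are the usual efficiency strategies.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Brute-force KNN: O(n) distance computations per query.
    KD-trees, ball trees, or approximate nearest-neighbour search
    reduce this cost on large datasets."""
    nearest = sorted(
        range(len(train)),
        key=lambda i: math.dist(train[i], query),
    )[:k]
    # Majority vote among the k nearest neighbours.
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(points, labels, (0.5, 0.5)))  # "a"
```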
2. How does a perceptron, as a building block of neural networks, contribute to the interpretability of
a network's decision-making process? Discuss how the transparency of a single perceptron compares
to that of a multi-layer deep network.
3. How does overfitting affect the performance of a perceptron model? Discuss how a perceptron
might fail to generalize well on unseen data despite achieving high accuracy on the training set.
4. How can missing data be handled in classification tasks? Discuss methods like imputation,
deletion, or using algorithms that handle missing values naturally (e.g., Decision Trees).
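As a concrete reference point for the imputation option, here is a sketch of mean imputation on a single numeric column, with `None` standing in for a missing value (the helper name `impute_mean` is illustrative):

```python
def impute_mean(column):
    """Mean imputation: replace missing entries (None) with the
    mean of the observed values in the same column."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

print(impute_mean([1.0, None, 3.0]))  # [1.0, 2.0, 3.0]
```

Deletion simply drops rows or columns with missing entries, while algorithms like decision trees can route a sample with a missing feature value without filling it in at all.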
5. How does batch normalization affect the backpropagation process? Discuss how it improves the
training speed and stability of deep neural networks?
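To illustrate the operation the question refers to, here is the normalization step of batch norm on a one-dimensional mini-batch (the learnable scale and shift parameters, gamma and beta, are omitted in this sketch). Keeping activations at zero mean and unit variance is what keeps gradient magnitudes well behaved during backpropagation.

```python
import math

def batch_norm(activations, eps=1e-5):
    """Normalize a mini-batch of activations to zero mean and unit
    variance; the learnable gamma/beta rescaling is omitted here."""
    mean = sum(activations) / len(activations)
    var = sum((a - mean) ** 2 for a in activations) / len(activations)
    return [(a - mean) / math.sqrt(var + eps) for a in activations]

out = batch_norm([2.0, 4.0, 6.0, 8.0])
print(out)  # roughly [-1.34, -0.45, 0.45, 1.34]
```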
6. What is Entropy in the context of information theory? How is it used to measure the impurity or
uncertainty of a dataset? Provide an example of how entropy can be calculated for a binary
classification problem.
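For the binary case the question asks about, entropy depends only on the fraction of positive examples, H(p) = -p·log2(p) - (1-p)·log2(1-p). A small sketch (the helper name `entropy` is illustrative):

```python
import math

def entropy(p_pos):
    """Shannon entropy (in bits) of a binary label distribution,
    given the fraction of positive examples."""
    if p_pos in (0.0, 1.0):
        return 0.0  # a pure node has zero uncertainty
    p_neg = 1.0 - p_pos
    return -(p_pos * math.log2(p_pos) + p_neg * math.log2(p_neg))

print(entropy(0.5))             # 1.0 — a 50/50 split is maximally impure
print(round(entropy(0.9), 3))   # 0.469 — a 90/10 split is much purer
```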
7. How does the decision tree algorithm recursively split data to form the tree? What criteria are
used to determine the best feature and the best threshold for splitting at each node?
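The standard criterion is to try candidate thresholds on each feature and keep the split that maximizes information gain (parent entropy minus the size-weighted entropy of the children). A minimal sketch for one numeric feature, assuming entropy as the impurity measure (the helper names are illustrative):

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    ent = 0.0
    for c in set(labels):
        p = labels.count(c) / len(labels)
        ent -= p * math.log2(p)
    return ent

def best_split(values, labels):
    """Try the midpoint between consecutive sorted feature values as a
    threshold; return the one with the highest information gain."""
    parent = entropy(labels)
    pairs = sorted(zip(values, labels))
    best_thr, best_gain = None, -1.0
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= thr]
        right = [l for v, l in pairs if v > thr]
        gain = (parent
                - len(left) / len(pairs) * entropy(left)
                - len(right) / len(pairs) * entropy(right))
        if gain > best_gain:
            best_thr, best_gain = thr, gain
    return best_thr, best_gain

thr, gain = best_split([1.0, 2.0, 3.0, 10.0, 11.0], ["a", "a", "a", "b", "b"])
print(thr, gain)  # 6.5 — a perfect split, gain equals the parent entropy
```

The tree then recurses on the left and right subsets until a stopping condition (purity, depth, or minimum node size) is met.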
8. Explain the architecture of a single-layer perceptron. How does it differ from more complex neural
network architectures, and what are its limitations in terms of expressiveness and learning
capability?
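As a reference for this question, here is a single-layer perceptron with the classic error-driven update rule, trained on AND (a linearly separable problem it can represent); XOR, by contrast, is not linearly separable, which is exactly the expressiveness limitation the question points at. The function name `train_perceptron` is illustrative.

```python
def train_perceptron(X, y, epochs=20, lr=0.1):
    """Single-layer perceptron: weighted sum plus a step activation.
    The update rule w += lr * (target - pred) * x converges only for
    linearly separable data (e.g. AND, but not XOR)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = target - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]  # AND gate
w, b = train_perceptron(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```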
9. What strategies can be employed during backpropagation to efficiently train deep neural networks
with millions of parameters? Discuss the importance of weight initialization, learning rate scheduling,
and regularization techniques in this context.
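Of the strategies listed above, learning rate scheduling is the easiest to show in isolation. A sketch of simple step decay, one common schedule among many (the function name and default parameters here are illustrative):

```python
def step_decay(initial_lr, epoch, drop=0.5, every=10):
    """Step-decay schedule: multiply the learning rate by `drop`
    every `every` epochs, so later updates take smaller steps."""
    return initial_lr * (drop ** (epoch // every))

print(step_decay(0.1, 0))   # 0.1  — full rate at the start
print(step_decay(0.1, 25))  # 0.025 — after two drops
```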
10. How does classification differ from regression in machine learning? Provide examples of real-
world problems where classification would be more appropriate than regression.