
Introduction To Neural Networks 67103 - 2019 Exam B

This document contains questions about neural networks including topics like pooling operations, receptive fields, stochastic gradient descent, discontinuities in functions, generative adversarial networks, residual blocks, global average pooling, class activation maps, and autoencoders.


Introduction to Neural Networks 67103

Moed B (second sitting), 4/3/2018, 2 hours


Answer all 10 questions

1. Is the uniform (average) pooling operation a linear operator? Is max-pooling linear? Explain your answer.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
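A quick numerical check (a study sketch, not part of the exam sheet; the window size of 2 and the test vectors are arbitrary choices): an operator P is linear iff P(ax + by) = aP(x) + bP(y) for all inputs x, y and scalars a, b.

```python
def avg_pool(x):
    # uniform (average) pooling over non-overlapping windows of size 2
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def max_pool(x):
    # max-pooling over the same windows
    return [max(x[i], x[i + 1]) for i in range(0, len(x), 2)]

x, y = [1.0, 4.0, 2.0, 0.0], [3.0, -1.0, 5.0, 2.0]
a, b = 2.0, -1.0
combo = [a * xi + b * yi for xi, yi in zip(x, y)]

# average pooling commutes with linear combinations...
avg_holds = avg_pool(combo) == [a * p + b * q
                                for p, q in zip(avg_pool(x), avg_pool(y))]
# ...max-pooling does not: a single counterexample is enough
max_holds = max_pool(combo) == [a * p + b * q
                                for p, q in zip(max_pool(x), max_pool(y))]
```

Here `avg_holds` is True and `max_holds` is False.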

2. In a fully-convolutional network, how many layers are needed in order for the receptive field of the last-layer neurons to cover the entire image (assuming a x2 pooling operation between every two convolutional layers)? Explain your answer.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
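For intuition, a sketch under assumptions of mine (k x k stride-1 convolutions with a x2 pooling after each one): every pooling doubles the input stride seen by later layers, so the receptive field roughly doubles per stage and covering an N-pixel image takes on the order of log2(N) layers.

```python
def layers_to_cover(image_size, k=3):
    # each conv grows the receptive field by (k-1) * (current input stride);
    # each x2 pooling doubles the stride seen by subsequent layers
    rf, stride, layers = 1, 1, 0
    while rf < image_size:
        rf += (k - 1) * stride
        stride *= 2
        layers += 1
    return layers
```

For example, `layers_to_cover(224)` evaluates to 7, close to log2(224) ≈ 7.8.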

3. True/False:
a. Gradient descent may converge to a global minimum
b. The backpropagation algorithm computes the loss’s derivative with respect to every network
parameter (weights+biases)
c. In Stochastic Gradient Descent we add noise to the exact gradient
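Regarding (c), a toy least-squares example (entirely my own construction) of how the minibatch gradient relates to the full gradient:

```python
import random

def grad_full(w, data):
    # exact gradient of the mean squared loss (1/N) * sum (w*x - y)^2
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def grad_minibatch(w, data, batch_size, rng):
    # SGD gradient: the same formula on a random subsample -- a noisy but
    # unbiased ESTIMATE of the full gradient; no noise is added explicitly
    sample = rng.sample(data, batch_size)
    return sum(2 * x * (w * x - y) for x, y in sample) / batch_size
```

With `batch_size == len(data)` the two coincide; smaller batches give stochastic estimates whose expectation is the full gradient.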

4. True/False:
a. The composition of an a-sawtooth function with a b-sawtooth function has at most a+b discontinuities
b. The number of discontinuities of a deep network may grow like N^d, where d is the depth
c. Shallow networks with O(n) neurons can always express deep networks with n neurons
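A numerical companion to (a) and (b), a sketch under assumed definitions of mine: an "a-sawtooth" here is a triangle wave with a periods on [0, 1], and maximal monotone linear pieces are counted rather than discontinuities.

```python
def sawtooth(x, a):
    # triangle wave with a periods on [0, 1]
    t = (x * a) % 1.0
    return 2 * t if t < 0.5 else 2 * (1 - t)

def count_pieces(f, n=4801):
    # count maximal monotone segments via sign changes of successive differences
    ys = [f(i / n) for i in range(n + 1)]
    d = [ys[i + 1] - ys[i] for i in range(n)]
    return 1 + sum(1 for i in range(1, n) if (d[i] > 0) != (d[i - 1] > 0))
```

Under these definitions `count_pieces` gives 4 for `sawtooth(x, 2)`, 6 for `sawtooth(x, 3)`, and 24 = 4 * 6 for their composition: pieces multiply under composition rather than add, which is the mechanism that lets depth compound complexity exponentially.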

5. Write down the Bayes rule that the Generative Stochastic Network method uses, and describe the two sampling steps it performs in order to sample images.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
6. Write down the loss used when training Generative Adversarial Networks (define your notation).
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
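For self-checking, a plain-Python evaluation of the standard minimax value V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))] on finite samples (`d_real` and `d_fake` are placeholder names of mine for the discriminator's outputs on real and generated batches):

```python
import math

def gan_value(d_real, d_fake):
    # empirical V(D, G): mean log D(x) over real samples
    # plus mean log(1 - D(G(z))) over generated samples
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))
```

D is trained to maximize V and G to minimize it; only the second term depends on G, and in practice the non-saturating variant that minimizes -log D(G(z)) is often used for G instead.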

7. How many 3x3 convolution layers are needed to achieve the same receptive field as a single 7x7
layer? Explain the advantage(s) of using multiple 3x3 layers.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
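As a sanity check, a sketch assuming stride-1 convolutions with no pooling:

```python
def stacked_rf(num_layers, k=3):
    # each k x k stride-1 conv grows the receptive field by k - 1
    rf = 1
    for _ in range(num_layers):
        rf += k - 1
    return rf

def conv_params(num_layers, k, c):
    # weights per spatial position, c channels in and out per layer
    return num_layers * k * k * c * c
```

`stacked_rf(3)` is 7, matching a single 7x7 layer, while three 3x3 layers use 27c^2 weights versus 49c^2 for one 7x7 layer, and interleave extra non-linearities between them.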

8. Briefly explain the reasoning/intuition behind the residual blocks used in ResNet, and how they are implemented in practice.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
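The core identity can be sketched in a few lines (a toy 1-D version; `residual_fn` stands in for the block's conv layers):

```python
def residual_block(x, residual_fn):
    # ResNet block: y = x + F(x). The layers only need to learn the residual
    # F(x) = H(x) - x; when the desired mapping is close to the identity,
    # F can stay near zero, and the skip path keeps gradients flowing.
    return [xi + fi for xi, fi in zip(x, residual_fn(x))]
```

With `residual_fn` identically zero the block is exactly the identity; in the real network F is typically a conv-BN-ReLU-conv stack, with a projection on the skip path when shapes change.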

9. What is Global Average Pooling (GAP)? How is it used in the computation of Class Activation
Maps (CAMs)?
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
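A minimal sketch of both operations (plain nested lists instead of tensors; shapes and names are choices of mine):

```python
def gap(feature_maps):
    # global average pooling: one scalar per channel (feature_maps[k] is H x W)
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

def cam(feature_maps, class_weights):
    # class activation map: the per-class weights of the linear layer that
    # follows GAP, applied at every spatial location instead of after pooling
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(wk * feature_maps[k][i][j]
                 for k, wk in enumerate(class_weights))
             for j in range(w)] for i in range(h)]
```

GAP feeds one scalar per channel into the final classifier; the CAM reuses that classifier's weights for a chosen class at each location, producing an H x W heatmap of class evidence.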

10. Consider an autoencoder whose architecture does not force dimensionality reduction (layer sizes do not decrease below that of the input). Explain how it is still possible to train it to extract useful information from the training set.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
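One common answer is the denoising autoencoder: corrupt the input and train the network to reconstruct the clean version, so the identity map stops being an optimal solution even without a bottleneck. A sketch (masking noise with probability p; all names are mine):

```python
import random

def corrupt(x, p=0.3, rng=None):
    # mask a fraction p of the inputs; the network is trained to reconstruct
    # the CLEAN x from corrupt(x), so copying the input no longer succeeds
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < p else xi for xi in x]

def reconstruction_loss(x_clean, x_reconstructed):
    # squared error measured against the clean target
    return sum((a - b) ** 2 for a, b in zip(x_clean, x_reconstructed))
```

Sparsity penalties or contractive regularizers serve the same purpose: they constrain the representation so that useful structure, not the raw input, is what gets preserved.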

Good luck!
