FVSBN (Fully Visible Sigmoid Belief Network)
Learning Mechanisms
SIMPLEST EXPLANATION:
1. Fully Visible:
All of the variables in the model are directly observed in the data; there are no hidden units.
2. Sigmoid Function:
The model uses a mathematical function called the sigmoid to decide how likely
something is. The sigmoid takes any real number and turns it into a probability
between 0 and 1. For example, it might predict that a pixel in an image has an 80%
chance of being white (see the short sigmoid sketch after this list).
3. Generative Model:
This means the model can not only analyze data but also create new data that
looks similar to what it has learned. For example, if you train it on pictures of cats,
it can generate new cat-like images (a rough sampling sketch also follows this list).
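As a small illustration that is not from the source, here is the sigmoid written out in Python; the input scores are made-up numbers chosen only to show how scores map to probabilities.

import math

def sigmoid(z):
    # Squash any real-valued score z into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(1.386))  # about 0.80, i.e. an 80% chance
print(sigmoid(0.0))    # exactly 0.5, i.e. maximum uncertainty
print(sigmoid(-3.0))   # about 0.05, i.e. very unlikely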
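To make the generation idea concrete, the following is a rough sketch under the standard FVSBN assumption that each binary variable is predicted from the earlier ones through a sigmoid. The weights W and biases b are hypothetical toy values, not parameters learned from any data.

import math
import random

def sigmoid(z):
    # Same squashing function as in the previous sketch.
    return 1.0 / (1.0 + math.exp(-z))

def sample_fvsbn(W, b):
    # Generate one binary vector, one variable at a time.
    # Variable i is conditioned only on the variables 0..i-1 sampled before it.
    D = len(b)
    x = [0] * D
    for i in range(D):
        score = b[i] + sum(W[i][j] * x[j] for j in range(i))
        p = sigmoid(score)                      # probability that x[i] equals 1
        x[i] = 1 if random.random() < p else 0
    return x

# Hypothetical toy parameters for a 4-variable model (only entries below the diagonal are used).
W = [[0, 0, 0, 0],
     [2, 0, 0, 0],
     [1, 1, 0, 0],
     [-1, 2, 1, 0]]
b = [0.0, -1.0, -0.5, 0.0]
print(sample_fvsbn(W, b))

Running this repeatedly produces different binary vectors, because each variable is drawn from its own sigmoid probability.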
Example Analogy:
Think of FVSBN like filling in a coloring book:
- You have a partially drawn picture (your visible data).
- The FVSBN looks at the patterns and probabilities (using the sigmoid function)
to guess what colors or shapes should go in the blank spaces.
- It’s straightforward because you’re working directly with the picture you can see:
no hidden tricks.
The main difference between explicit density models and implicit
density models is that explicit density models define an explicit density function over
the data, one you can write down and evaluate, while implicit density models do not;
they only give you a way to draw samples. In other words, explicit models assume a
particular form for the distribution of the data, whereas implicit models are defined
only through the samples they generate.
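For instance, the FVSBN above is an explicit density model: its density can be written out directly. A standard way to state it (with \sigma denoting the sigmoid, w_{ij} the weights, and b_i the biases) is

\[
p(x) \;=\; \prod_{i=1}^{D} p(x_i \mid x_1, \dots, x_{i-1}),
\qquad
p(x_i = 1 \mid x_{<i}) \;=\; \sigma\Big(b_i + \sum_{j<i} w_{ij}\, x_j\Big).
\]

An implicit density model, by contrast, only provides a procedure for producing samples and never writes down such a formula.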