Bayesian Networks 3 - Maximum Likelihood | Stanford CS221: AI (Autumn 2019)

For more information about Stanford's Artificial Intelligence professional and graduate programs, visit: https://stanford.io/2Zlc5Iu

Topics: Bayesian Networks

Percy Liang, Associate Professor & Dorsa Sadigh, Assistant Professor - Stanford University
http://onlinehub.stanford.edu/

Associate Professor Percy Liang
Associate Professor of Computer Science and Statistics (courtesy)
https://profiles.stanford.edu/percy-liang

Assistant Professor Dorsa Sadigh
Assistant Professor in the Computer Science Department & Electrical Engineering Department
https://profiles.stanford.edu/dorsa-sadigh

To follow along with the course schedule and syllabus, visit: https://stanford-cs221.github.io/autumn2019/#schedule

0:00 Introduction
0:18 Announcements
2:00 Review: Bayesian network
2:57 Review: probabilistic inference
4:13 Where do parameters come from?
4:37 Roadmap
5:02 Learning task
6:29 Example: one variable
12:06 Example: v-structure
14:48 Example: inverted-v structure
20:28 Parameter sharing
21:35 Example: Naive Bayes
26:05 Example: HMMs
33:40 General case: learning algorithm
36:14 Maximum likelihood
41:05 Scenario 2
42:45 Regularization: Laplace smoothing
44:14 Example: two variables
49:09 Motivation
49:49 Maximum marginal likelihood
52:59 Expectation Maximization (EM)
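The chapter list above moves from closed-form maximum likelihood (estimate each local conditional distribution by counting and normalizing over fully observed data) to Laplace smoothing as regularization. As a quick companion to those segments, here is a minimal Python sketch of that count-and-normalize estimator; the function names, the data layout, and the toy parent/child example (G, R) are illustrative assumptions, not code from the lecture.

```python
from collections import Counter, defaultdict

def mle_cpt(data):
    """Maximum likelihood estimate of P(child | parents): count and normalize.

    data: iterable of (parent_assignment, child_value) pairs, where the
    parent assignment is a tuple so the same code handles zero or many parents.
    """
    counts = defaultdict(Counter)
    for parents, child in data:
        counts[parents][child] += 1
    cpt = {}
    for parents, child_counts in counts.items():
        total = sum(child_counts.values())
        cpt[parents] = {c: n / total for c, n in child_counts.items()}
    return cpt

def mle_cpt_laplace(data, child_domain, lam=1):
    """Same estimate with Laplace smoothing: add lam to every count,
    including values of the child never seen with those parents."""
    counts = defaultdict(Counter)
    for parents, child in data:
        counts[parents][child] += 1
    cpt = {}
    for parents, child_counts in counts.items():
        total = sum(child_counts.values()) + lam * len(child_domain)
        cpt[parents] = {c: (child_counts[c] + lam) / total for c in child_domain}
    return cpt

if __name__ == "__main__":
    # Hypothetical toy data: child R observed together with its single parent G.
    data = [(("g1",), "r0"), (("g1",), "r1"), (("g1",), "r1"), (("g2",), "r0")]
    print(mle_cpt(data))                        # unsmoothed: {('g1',): {'r0': 1/3, 'r1': 2/3}, ...}
    print(mle_cpt_laplace(data, ["r0", "r1"]))  # smoothed toward uniform
```

The smoothed version illustrates why Laplace smoothing matters for sparse counts: with lam > 0, no conditional probability is ever estimated as exactly zero.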