
Learner reviews and feedback for Sample-based Learning Methods by the University of Alberta

4.8
512 ratings
102 reviews

About the Course

In this course, you will learn about several algorithms that can learn near-optimal policies based on trial-and-error interaction with the environment---learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal difference learning methods including Q-learning. We will wrap up this course investigating how we can get the best of both worlds: algorithms that can combine model-based planning (similar to dynamic programming) and temporal difference updates to radically accelerate learning.

By the end of this course you will be able to:
- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna...
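For readers curious what the tabular methods described above look like in code, here is a minimal sketch of Q-learning on a toy three-state chain. The environment, hyperparameters, and function name are illustrative assumptions for this page, not taken from the course's own notebooks:

```python
import random

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a tiny deterministic chain: states 0 -> 1 -> 2
    (state 2 is terminal). Action 'right' moves forward (reward 1.0 on
    reaching the terminal state, 0 otherwise); action 'stay' does nothing."""
    rng = random.Random(seed)
    actions = ["right", "stay"]
    Q = {(s, a): 0.0 for s in range(3) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != 2:
            # Epsilon-greedy action selection from the current Q estimates.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda a2: Q[(s, a2)])
            s_next = s + 1 if a == "right" else s
            r = 1.0 if s_next == 2 else 0.0
            # Q-learning update: bootstrap off the greedy value of s_next,
            # regardless of which action the behavior policy takes next
            # (this is what makes Q-learning off-policy).
            best_next = 0.0 if s_next == 2 else max(Q[(s_next, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```

After training, the estimates approach the true optimal values for this chain: Q(1, right) tends toward 1.0 and Q(0, right) toward gamma * 1.0 = 0.9, illustrating the discounted bootstrapping the course builds up from TD prediction to control.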



Sample-based Learning Methods: 26 - 50 of 100 Reviews

by Shashidhara K

Dec 12, 2019

This course required more work than the 1st in the series (maybe I took it lightly, as the first was not that difficult). Request: please include some worked examples (calculations), or include them in the graded/ungraded quizzes; that would be nice.

by LOS

Jan 21, 2020

Awesome! It is a pity n-step methods and eligibility traces were not included - it felt like a huge gap. All the later chapters reference n-step methods, and your understanding won't be complete unless you learn them as well.

by Kinal M

Jan 10, 2020

Really great resource to follow along the RL Book. IMP Suggestion: Do not skip the reading assignments, they are really helpful and following the videos and assignments becomes easy.

by Kyle N

Oct 03, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

by Gordon L W C

Feb 15, 2020

The course is intermediate in difficulty, but it explains the concepts clearly enough for me to understand the differences between the various sample-based learning methods.

by Art H

Apr 14, 2020

Well done. Follows Reinforcement Learning (Sutton/Barto) closely and explains topics well. Graded notebooks are invaluable in understanding the material well.

by Umut Z

Nov 23, 2019

Good balance of theory and programming assignments. I really like the weekly bonus videos with professors and developers. Recommended to everyone.

by DOMENICO P

Apr 19, 2020

One of the most accurate, precise, and well-explained courses I have ever had on Coursera. Congratulations to the teachers and course creators.

by 李谨杰

May 01, 2020

An excellent course!!!! This is the best course I have ever taken on Coursera! Thanks a lot to the two instructors and the teaching assistants!

by Christian J R F

May 07, 2020

Excellent course. I would love to do some other exercises outside of the grid world, but in general the content is good and interesting.

by Antonis S

May 09, 2020

Very well prepared and interesting course! I will definitely seek out more in the future! Thank you so much for offering this course!

by Kiara O

Jan 07, 2020

This course is well explained, easy to follow and made me understand much better the tabular RL methods. I liked it very much.

by John J

Apr 28, 2020

This second instalment in the reinforcement learning journey is amazing, although you can sometimes get stuck in places.

by nicole s

Feb 02, 2020

I like the teaching style, the emphasis on understanding, and the fruitful combination with the textbook. Highly recommended!

by Nikhil G

Nov 25, 2019

Excellent course companion to the textbook, clarifies many of the vague topics and gives good tests to ensure understanding

by Lik M C

Jan 10, 2020

Again, the course is excellent. The assignments are even better than Course 1. A really great course worth taking!

by Zhang d

Apr 07, 2020

It is a wonderful and meaningful course, which teaches us about Q-learning, Expected Sarsa, and so on.

by Xingbei W

Mar 09, 2020

Although I had already learned Q-learning and TD, this course still gave me a lot of new insight and understanding of them.

by Stewart A

Sep 03, 2019

Great course! Lots of hands-on RL algorithms. I'm looking forward to the next course in the specialization.

by Martin P

May 30, 2020

A very interesting topic presented in an easy-to-consume form. It was fun learning with this course.

by Han-June K

Apr 07, 2020

The course is spectacular! I've learned a great deal about reinforcement learning! Thank you!

by Roberto M

Mar 28, 2020

The course is well organized and teachers provide a lot of examples to facilitate comprehension.

by Wang G

Oct 19, 2019

Very nice explanations and assignments! Looking forward to the next 2 courses in this specialization!

by Sodagreenmario

Sep 18, 2019

Great course, but there are still some little bugs that can be fixed in notebook assignments.

by Chris D

Apr 18, 2020

Very good. Minor issues with inconsistency between parameter naming in different exercises.