
Learner reviews and feedback for Machine Learning: Classification by the University of Washington

4.7
3,534 ratings
587 reviews

About the Course

Case Studies: Analyzing Sentiment & Loan Default Prediction

In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information, ...). In our second case study for this course, loan default prediction, you will tackle financial data and predict when a loan is likely to be risky or safe for the bank. These tasks are examples of classification, one of the most widely used areas of machine learning, with a broad array of applications including ad targeting, spam detection, medical diagnosis, and image classification.

In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful techniques, which are most widely used in practice, including logistic regression, decision trees, and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant challenges you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques will behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper!

Learning Objectives: By the end of this course, you will be able to:

-Describe the input and output of a classification model.
-Tackle both binary and multiclass classification problems.
-Implement a logistic regression model for large-scale classification.
-Create a non-linear model using decision trees.
-Improve the performance of any model using boosting.
-Scale your methods with stochastic gradient ascent.
-Describe the underlying decision boundaries.
-Build a classification model to predict sentiment in a product review dataset.
-Analyze financial data to predict loan defaults.
-Use techniques for handling missing data.
-Evaluate your models using precision-recall metrics.
-Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).
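To make the description above concrete, here is a minimal, hypothetical sketch of the kind of sentiment classifier the course builds: logistic regression over bag-of-words features, evaluated with precision and recall. It uses scikit-learn rather than the course's own toolkit, and the tiny inline dataset and variable names are illustrative assumptions, not course material.

# Minimal sketch (assumes scikit-learn is installed); not the course's reference implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Hypothetical toy review data: 1 = positive sentiment, 0 = negative.
reviews = [
    "Great product, works perfectly",
    "Terrible quality, broke after a day",
    "Absolutely love it, would buy again",
    "Waste of money, very disappointed",
]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()            # bag-of-words features from the review text
X = vectorizer.fit_transform(reviews)

model = LogisticRegression()              # linear classifier with a sigmoid link
model.fit(X, labels)

predictions = model.predict(X)            # in practice, predict on held-out data instead
print("precision:", precision_score(labels, predictions))
print("recall:   ", recall_score(labels, predictions))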

Popular Reviews

SM
Jun 14, 2020

A very deep and comprehensive course for learning some of the core fundamentals of Machine Learning. Can get a bit frustrating at times because of numerous assignments :P but a fun thing overall :)

SS
Oct 15, 2016

Hats off to the team who put the course together! Prof Guestrin is a great teacher. The course gave me in-depth knowledge regarding classification and the math and intuition behind it. It was fun!

Filter:

Machine Learning: Classification: 376 - 400 of 555 Reviews

by Muhammad H S

Nov 2, 2016

Excellent

by Joshua C

May 3, 2017

Awesome!

by Roberto E

Mar 1, 2017

awesome!

by Isura N

Dec 28, 2017

Hoooray

by Anshumaan K P

Nov 11, 2020

NYC ;)

by Shashidhar Y

Apr 2, 2019

Nice!!

by Md. T U B

Sep 2, 2020

great

by Subhadip P

Aug 4, 2020

great

by Nicholas S

Oct 7, 2016

Great

by 李真

Mar 5, 2016

great

by SAYANTAN N

Jan 28, 2021

good

by boulealam c

Dec 15, 2020

good

by Saurabh A

Sep 11, 2020

good

by SUJAY P

Aug 21, 2020

nice

by ANKAN M

Aug 16, 2020

nice

by Sadhiq A

Jun 19, 2020

good

by AMARTHALURU N K

Nov 24, 2019

good

by RISHI P M

Aug 19, 2019

Good

by Akash G

Mar 10, 2019

good

by xiaofeng y

Feb 5, 2017

good

by Kumiko K

Jun 5, 2016

Fun!

by Arun K P

Oct 17, 2018

G

by Navinkumar

Feb 23, 2017

g

by MARIANA L J

Aug 12, 2016

The good:

-Good examples to learn the concepts

-Good organization of the material

-The assignments were well-explained and easy to follow

-The good humor and attitude of the professor makes the lectures very engaging

-All video lectures are short, which makes them easy to digest and follow (the optional videos were longer than the rest of the lectures, but the material covered in them was pretty advanced, so their length is justifiable)

Things that can be improved:

-In some of the videos, the professor seemed to cruise through some of the concepts. I understand that it is recommended to take the series of courses in a certain order, but sometimes I felt we were rushing through the material covered

-I may be nitpicking here, but I wish the professor had used a different color to write on the slides (the red he used clashed horribly with some of the slides' backgrounds and made it difficult to read his observations)

Overall, a good course to take and very easy to follow if taken together with the other courses in the series.

by Hanif S

Jun 2, 2016

Highly recommended course, looking under the hood to examine how popular ML algorithms like decision trees and boosting are actually implemented. I'm surprised at how intuitive the idea of boosting really is. Also interesting that random forests are dismissed as not as powerful as boosting, but I would love to know why! Both methods appear to expose more data to the learner, and a heuristic comparison between RF and boosting would have been greatly appreciated.

One can immediately notice the difference between statistician Emily, who took us through the mathematical derivation of the derivative (ha.ha.) function for linear regression (much appreciated Emily!), and computer scientist Carlos, who skipped this bit for logistic regression but provided lots of verbose code to track the running of algorithms during assignments (helps to see what is actually happening under the hood). Excellent lecturers both, thank you!
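For readers curious about the step this reviewer says was skipped for logistic regression, the standard gradient-ascent result looks like the following (a textbook sketch, not quoted from the lectures; h_j denotes the j-th feature of an input):

P(y = +1 \mid \mathbf{x}, \mathbf{w}) = \frac{1}{1 + e^{-\mathbf{w}^{\top} h(\mathbf{x})}},
\qquad
\frac{\partial \ell(\mathbf{w})}{\partial w_j} = \sum_{i=1}^{N} h_j(\mathbf{x}_i)\left(\mathbb{1}[y_i = +1] - P(y = +1 \mid \mathbf{x}_i, \mathbf{w})\right),
\qquad
w_j \leftarrow w_j + \eta \, \frac{\partial \ell(\mathbf{w})}{\partial w_j}

where \ell(\mathbf{w}) is the data log-likelihood and \eta is the step size.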