Learner Reviews & Feedback for Machine Learning: Classification by the University of Washington

4.7
2,960 ratings
487 reviews

About the Course

Case Studies: Analyzing Sentiment & Loan Default Prediction

In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information,...). In our second case study for this course, loan default prediction, you will tackle financial data and predict when a loan is likely to be risky or safe for the bank. These tasks are examples of classification, one of the most widely used areas of machine learning, with a broad array of applications including ad targeting, spam detection, medical diagnosis and image classification. In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful techniques, which are most widely used in practice, including logistic regression, decision trees and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant tasks you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques will behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper!

Learning Objectives: By the end of this course, you will be able to:

-Describe the input and output of a classification model.

-Tackle both binary and multiclass classification problems.

-Implement a logistic regression model for large-scale classification.

-Create a non-linear model using decision trees.

-Improve the performance of any model using boosting.

-Scale your methods with stochastic gradient ascent.

-Describe the underlying decision boundaries.

-Build a classification model to predict sentiment in a product review dataset.

-Analyze financial data to predict loan defaults.

-Use techniques for handling missing data.

-Evaluate your models using precision-recall metrics.

-Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).
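As a rough illustration of what the first case study involves, here is a minimal sketch of a logistic regression sentiment classifier trained with stochastic gradient ascent and evaluated with precision and recall. It is not taken from the course materials; the toy bag-of-words features, step size, and number of passes are assumptions made purely for illustration.

```python
# Minimal sketch (not from the course): logistic regression for sentiment,
# trained with stochastic gradient ascent, evaluated with precision/recall.
import numpy as np

# Tiny bag-of-words counts for four reviews (hypothetical hand-picked words);
# 1 = positive sentiment, 0 = negative sentiment.
X = np.array([
    [2, 0, 1, 0],
    [0, 2, 0, 1],
    [1, 0, 2, 0],
    [0, 1, 0, 2],
], dtype=float)
y = np.array([1, 0, 1, 0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stochastic gradient ascent on the log-likelihood: update the weights one
# example at a time in the direction of the per-example gradient.
rng = np.random.default_rng(0)
w = np.zeros(X.shape[1])
step_size = 0.1
for epoch in range(100):
    for i in rng.permutation(len(y)):
        error = y[i] - sigmoid(X[i] @ w)   # indicator minus predicted P(y=1|x)
        w += step_size * error * X[i]      # per-example gradient step

# Predict and compute precision and recall by hand (a real evaluation would
# use a held-out test set rather than the training data).
pred = (sigmoid(X @ w) >= 0.5).astype(int)
tp = np.sum((pred == 1) & (y == 1))
fp = np.sum((pred == 1) & (y == 0))
fn = np.sum((pred == 0) & (y == 1))
print("precision:", tp / (tp + fp))
print("recall:", tp / (tp + fn))
```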

Top Reviews

SS

Oct 16, 2016

Hats off to the team who put the course together! Prof Guestrin is a great teacher. The course gave me in-depth knowledge regarding classification and the math and intuition behind it. It was fun!

CJ

Jan 25, 2017

Very impressive course. I would recommend taking courses 1 and 2 in this specialization first, since this course skips over some things that were explained thoroughly in those courses.


Machine Learning: Classification: Reviews 301 - 325 of 455

by Thuong D H

Sep 23, 2016

Good course!

by Suneel M

May 09, 2018

Excellent c

by Do A T

Nov 15, 2017

very useful

by 李今晖

Sep 01, 2016

Good course

by Jan L

Aug 02, 2017

Just great

by 童哲明

Jul 27, 2016

very good!

by Jair d M F

Apr 21, 2016

Very Good!

by Nidal M G

Dec 04, 2018

very good

by 王曾

Nov 27, 2017

very good

by Muhammad H S

Nov 02, 2016

Excellent

by Joshua C

May 03, 2017

Awesome!

by Roberto E

Mar 01, 2017

awesome!

by Isura N

Dec 28, 2017

Hoooray

by Shashidhar Y

Apr 02, 2019

Nice!!

by Nicholas S

Oct 07, 2016

Great

by 李真

Mar 06, 2016

great

by AMARTHALURU N K

Nov 24, 2019

good

by RISHI P M

Aug 19, 2019

Good

by Akash G

Mar 10, 2019

good

by xiaofeng y

Feb 06, 2017

good

by Kumiko K

Jun 05, 2016

Fun!

by Arun K P

Oct 17, 2018

G

by Navinkumar

Feb 23, 2017

g

by MARIANA L J

Aug 12, 2016

The good:

-Good examples to learn the concepts

-Good organization of the material

-The assignments were well explained and easy to follow

-The good humor and attitude of the professor makes the lectures very engaging

-All video lectures are short, which makes them easy to digest and follow (the optional videos were longer than the rest of the lectures, but the material covered in those was pretty advanced and their length is justifiable)

Things that can be improved:

-In some of the videos the professor seemed to cruise through some of the concepts. I understand that it is recommended to take the series of courses in a certain order, but sometimes I felt we were rushing through the material.

-I may be nitpicking here, but I wish the professor had used a different color to write on the slides (the red he used clashed horribly with some of the slides' backgrounds and made it difficult to read his observations)

Overall, a good course to take and very easy to follow if taken together with the other courses in the series.

by Hanif S

Jun 02, 2016

Highly recommended course, looking under the hood to examine how popular ML algorithms like decision trees and boosting are actually implemented. I'm surprised at how intuitive the idea of boosting really is. Also interesting that random forests are dismissed as not as powerful as boosting, but I would love to know why! Both methods appear to expose more data to the learner, and a heuristic comparison between RF and boosting would have been greatly appreciated.

One can immediately notice the difference between statistician Emily, who took us through the mathematical derivation of the derivative (ha.ha.) function for linear regression (much appreciated Emily!), and computer scientist Carlos, who skipped this bit for logistic regression but provided lots of verbose code to track the running of algorithms during assignments (helps to see what is actually happening under the hood). Excellent lecturers both, thank you!