Machine Learning II

Expand your knowledge of machine learning by examining clustering methods, matrix factorization, and sequential models.

Modules/Weeks

6

Weekly Effort

8-10 hours

Format

Cost

$199.00

Course Description

  • Delve deeper into supervised and unsupervised machine learning techniques.
  • Focus on clustering methods, matrix factorization, and sequential models.
  • Cover topics such as boosting, K-means clustering, mixture models, and hidden Markov models.
  • Gain an advanced understanding of machine learning techniques and apply them in real-world contexts.
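As a taste of the clustering methods listed above, here is a minimal K-means (Lloyd's algorithm) sketch in NumPy. This is an illustrative example only; the course's actual assignments, datasets, and tooling are not specified on this page.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assign each point to its nearest center (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return centers, labels

# Synthetic data: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
centers, labels = kmeans(X, k=2)
```

On well-separated data like this, the two recovered clusters line up with the two blobs; in practice, K-means is sensitive to initialization, which motivates the probabilistic mixture models the course also covers.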

Course Prerequisites

  • Calculus, linear algebra, probability, and statistical concepts
  • Proficiency in coding
  • Comfort with data manipulation

What You Will Learn

By the end of this course, learners will be able to:


  • Develop an in-depth understanding of advanced machine learning techniques, including boosting, clustering algorithms, and matrix factorization.

  • Master probabilistic and non-probabilistic approaches to unsupervised learning.

  • Apply sequential models, such as Markov models and hidden Markov models, and gain knowledge of association analysis and model selection.

  • Apply these techniques to solve complex problems through coding projects and quizzes, gaining a deeper understanding of machine learning methods and their practical applications.
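To illustrate the sequential models named in the outcomes above, here is a short sketch of a two-state Markov chain and its stationary distribution, computed as the leading left eigenvector of the transition matrix. The transition probabilities are hypothetical, chosen only for the example.

```python
import numpy as np

# Hypothetical two-state chain (e.g. dry/rainy): row = current state, col = next state.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P with eigenvalue 1 (right eigenvector of P transposed).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()  # normalize to a probability vector
```

For this P, pi works out to (5/6, 1/6): the chain spends five-sixths of its time in the first state. Hidden Markov models, covered in Module 5, add an observation layer on top of such a latent chain.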


Course Outline


Module 1: Boosting and unsupervised learning

Module 2: Unsupervised learning

Module 3: Extended unsupervised learning

Module 4: PCA and Markov models

Module 5: HMM and state-space models

Module 6: Association rules and model selection
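Module 4's PCA can be sketched in a few lines via the singular value decomposition of the centered data matrix. This is a generic illustration, not the course's own implementation.

```python
import numpy as np

def pca(X, n_components):
    """Project centered data onto the top principal directions via SVD."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                # rows = principal directions
    explained_var = S[:n_components] ** 2 / (len(X) - 1)
    return Xc @ components.T, components, explained_var

# Synthetic 2-D data stretched along the first axis (std 3 vs. 0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
Z, comps, var = pca(X, 1)
```

The top component captures the stretched direction, and its explained variance is close to 9 (the square of the larger standard deviation); this low-rank view of data also underlies the matrix factorization methods in the course.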

Instructors

John Paisley
Associate Professor of Electrical Engineering

John Paisley joined the Department of Electrical Engineering at Columbia University in Fall 2013 and is an affiliated faculty member of the Data Science Institute at Columbia University. He received the B.S., M.S. and Ph.D. degrees in Electrical and Computer Engineering from Duke University in 2004, 2007 and 2010. He was then a postdoctoral researcher in the Computer Science departments at Princeton University and UC Berkeley, where he worked on developing probabilistic models for large-scale text and image processing applications. He is particularly interested in developing Bayesian models and posterior inference techniques that address the Big Data problem, with applications to data analysis and exploration, recommendation systems, information retrieval, and compressed sensing.

Please note that there are no instructors or course assistants actively monitoring this course.
