Tamara Broderick

Selected Tutorials

Nonparametric Bayesian Statistics

The active version of this tutorial is currently taking place at the Foundations of Machine Learning Bootcamp at the Simons Institute at UC Berkeley.

  • Full information, slides, and code will be available here.

The most recent version of this tutorial with video took place on 2016 May 16 at the Machine Learning Summer School, Cádiz, Spain.

For other versions of this tutorial see the following links:

  • 2017 January 9, 13. MIT Lincoln Labs, USA. Full information, slides, and code can be found here.
  • 2016 June 12. University of Cagliari, Sardinia, Italy. Full information, slides, and code can be found here.
  • 2015 August 17, 18, 19. MIT, Cambridge, MA. Full information, slides, and code can be found here.
  • 2015 July 20, 21, 24. Machine Learning Summer School, Tübingen, Germany. Videos, full information, slides, and code can be found here.

Clusters and features from combinatorial stochastic processes

2012 September 13. Bayesian Nonparametrics, ICERM Semester Program Workshop, Brown University, Providence, Rhode Island, USA.

An introduction to Bayesian cluster models with an unbounded number of potential clusters---as well as a generalization of clustering in which each data point can belong to multiple latent groups. The tutorial places particular emphasis on the following popular nonparametric Bayesian constructions: Chinese restaurant processes, Indian buffet processes, Dirichlet processes, and beta processes. The talk assumes some existing knowledge of Bayes' theorem and Bayesian statistics, as well as passing knowledge of Markov chain Monte Carlo algorithms.
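As a rough illustration of the Chinese restaurant process mentioned in the abstract, the following is a minimal sampler sketch (the function name and parameters are illustrative, not taken from the tutorial materials): each customer joins an existing table with probability proportional to its size, or opens a new table with probability proportional to the concentration parameter alpha.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a random partition of n points from a Chinese restaurant
    process with concentration parameter alpha (illustrative sketch)."""
    rng = random.Random(seed)
    tables = []       # tables[k] = number of customers seated at table k
    assignments = []  # assignments[i] = table index of customer i
    for i in range(n):
        # Unnormalized probabilities: existing table sizes, then alpha
        # for a brand-new table.
        weights = tables + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(0)  # customer opens a new table
        tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = crp_partition(20, alpha=1.0)
```

The "rich get richer" dynamic is visible in the weights: large tables attract new customers in proportion to their size, while the number of occupied tables remains unbounded a priori.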

Machine learning crash course part II: clustering

2012 August 21. AMP Camp: Big Data Bootcamp, UC Berkeley, California, USA. [abstract]

This talk introduces clustering as a subfield of machine learning with an emphasis on practical usage. We cover the K-means algorithm, cluster evaluation, different meanings of clustering, and data pre-processing. A final example illustrates how the ideas of the lecture come together when tackling real data.