Hyunjik Kim

Preprints | Publications

I'm a third-year PhD student in machine learning at the University of Oxford, supervised by Prof. Yee Whye Teh in the Machine Learning group at the Department of Statistics. I also spend two days a week at DeepMind as a research scientist.

My research interests fall under the topic of scalable probabilistic inference. So far I have worked on scaling up inference for Gaussian processes (GPs): regression models for collaborative filtering motivated by a scalable GP approximation, and a method for scaling up the compositional kernel search used by the Automatic Statistician via variational sparse GP methods. More recently I have developed an interest in deep generative models, in particular latent variable models whose latent variables are interpretable, for example representing disentangled factors of variation in the data. I am also interested in gradient-based inference for generative models with discrete units, which ties in closely with interpretability.
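To make the kernel-composition idea concrete, here is a minimal NumPy sketch (illustrative only, not code from any of the papers below; all function names are mine) of how a search space like the Automatic Statistician's combines base kernels through sums and products:

```python
import numpy as np

# Base kernels over 1-D inputs; each returns an (n, m) Gram matrix.
def rbf(x, y, lengthscale=1.0):
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def linear(x, y):
    return x[:, None] * y[None, :]

def periodic(x, y, period=1.0, lengthscale=1.0):
    d = np.pi * np.abs(x[:, None] - y[None, :]) / period
    return np.exp(-2.0 * (np.sin(d) / lengthscale) ** 2)

# A compositional search explores sums and products of base kernels,
# e.g. (RBF * PERIODIC) + LINEAR for "smooth seasonal pattern plus drift".
def composed_kernel(x, y):
    return rbf(x, y) * periodic(x, y) + linear(x, y)

x = np.linspace(0.0, 5.0, 50)
K = composed_kernel(x, x)  # Gram matrix used in GP regression
print(K.shape)             # (50, 50)
```

Each composite kernel defines a different GP regression model, and the search scores candidates by (approximate) marginal likelihood; the O(N^3) cost of that scoring is what the scalability work below addresses.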

Previously, I studied Mathematics at the University of Cambridge, from which I obtained B.A. and M.Math. degrees. I spent a summer as a research intern at Microsoft Research Cambridge, working on collaborative filtering, and spent last summer interning at DeepMind, working on unsupervised learning of disentangled representations.

Curriculum Vitae

E-mail: hkim@stats.ox.ac.uk

Recent

Public Engagement: Introducing Machine Learning to the Public

Together with friends at Oxford, I helped create a cute two-minute animation that introduces machine learning to the general public. Check it out below!

[Embedded video: a two-minute animated introduction to machine learning]

Further details can be found here

Preprints

Collaborative Filtering with Side Information: a Gaussian Process Perspective

Abstract: We tackle the problem of collaborative filtering (CF) with side information, through the lens of Gaussian Process (GP) regression. Driven by the idea of using the kernel to explicitly model user-item similarities, we formulate the GP in a way that allows the incorporation of low-rank matrix factorisation, arriving at our model, the Tucker Gaussian Process (TGP). Consequently, the TGP generalises classical Bayesian matrix factorisation models and goes beyond them to give a natural and elegant method for incorporating side information, yielding enhanced predictive performance for CF problems. Moreover, we show that it is a novel model for regression, especially well-suited to grid-structured data and problems where the dependence on covariates is close to being separable.

Hyunjik Kim, Xiaoyu Lu, Seth Flaxman, Yee Whye Teh
arXiv, 2016
pdf | bibtex
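As a rough illustration of the kernel viewpoint described in the abstract, here is a generic product-kernel GP sketch (not the TGP itself; the data and all names are made up for illustration). The kernel factorises over user and item side information, so similar users rating similar items get similar predicted scores:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

n_obs, d_user, d_item = 100, 5, 4
U = rng.normal(size=(n_obs, d_user))   # user side information
V = rng.normal(size=(n_obs, d_item))   # item side information
y = rng.normal(size=n_obs)             # observed ratings (toy data)

# Product kernel over user-item pairs:
#   k((u, i), (u', i')) = k_user(u, u') * k_item(i, i')
K = rbf(U, U) * rbf(V, V) + 0.1 * np.eye(n_obs)  # plus observation noise

# Standard GP posterior mean at a new user-item pair (u*, v*).
u_star = rng.normal(size=(1, d_user))
v_star = rng.normal(size=(1, d_item))
k_star = rbf(u_star, U) * rbf(v_star, V)         # (1, n_obs)
mean = k_star @ np.linalg.solve(K, y)            # predicted rating
print(mean.item())
```

The exact posterior above costs O(n^3) in the number of observed ratings, which is why the model is paired with low-rank matrix factorisation structure in the paper.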

Publications

Scaling up the Automatic Statistician: Scalable Structure Discovery for Regression using Gaussian Processes

Abstract: Automating statistical modelling is a challenging problem that has far-reaching implications for artificial intelligence. The Automatic Statistician employs a kernel search algorithm to provide a first step in this direction for regression problems. However, this does not scale due to its O(N^3) running time for model selection. This is undesirable not only because the average size of data sets is growing fast, but also because there is potentially more information in bigger data, implying a greater need for more expressive models that can discover finer structure. We propose Scalable Kernel Composition (SKC), a scalable kernel search algorithm, to encompass big data within the boundaries of automated statistical modelling.

Hyunjik Kim, Yee Whye Teh
arXiv, 2017
pdf | bibtex
AutoML 2016, Journal of Machine Learning Research Workshop and Conference Proceedings.
Practical Bayesian Nonparametrics Workshop, NIPS 2016. Oral & Travel Award.
pdf
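For intuition about where the scalability comes from: sparse GP methods replace the exact kernel matrix with a low-rank approximation built from m << N inducing points. The Nystrom-style sketch below shows the standard construction (a generic technique, not the paper's algorithm; the sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

n, m, d = 2000, 50, 3                     # n data points, m << n inducing points
X = rng.normal(size=(n, d))
Z = X[rng.choice(n, m, replace=False)]    # inducing inputs (subset of data)

Knm = rbf(X, Z)                           # (n, m); the full (n, n) matrix is never formed
Kmm = rbf(Z, Z) + 1e-6 * np.eye(m)        # jitter for numerical stability

# Nystrom low-rank approximation: K ~= Knm Kmm^{-1} Knm^T.
# Downstream solves then cost O(n m^2) instead of the O(n^3) of exact GPs.
L = np.linalg.cholesky(Kmm)
A = np.linalg.solve(L, Knm.T)             # (m, n); K_approx = A.T @ A
print(A.shape)
```

Scoring each candidate kernel in the search with such an approximation is what brings model selection within reach for large N.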