Research

Broadly speaking, I’m interested in the study of systems that exhibit linked behavior across multiple scales (temporal, spatial, or otherwise). More specifically, some of my past work has involved multiscale model reduction applied to neural networks, as well as computing distances between graphs. Much of this work lies at the intersection of machine learning and spectral graph theory. Below is more specific information about some of my projects. Section headers are links to pages with more details and source code.

COVID-19/5G Fake News Classification

A student project that used a variety of machine learning approaches to classify tweets about COVID-19 and 5G as fake news or not.
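As an illustration only, here is a minimal sketch of one such approach: TF-IDF features fed to a logistic regression classifier. The file name and the `text`/`label` column names are hypothetical placeholders, not the project’s actual data schema or methods.

```python
# Minimal fake-news classification baseline: TF-IDF features + logistic
# regression. "tweets.csv" and the column names "text"/"label" are
# hypothetical placeholders for a labeled tweet dataset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("tweets.csv")  # hypothetical labeled tweets
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0, stratify=df["label"]
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```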

Graph Diffusion Distance

Graph Diffusion Distance (GDD) is a measure of graph similarity based on comparing the Laplacian exponential kernels of two graphs. Examining its properties and developing efficient algorithms to compute it was the focus of my PhD at the University of California, Irvine.
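For two graphs of the same size, one common formulation takes the largest Frobenius-norm gap between the two heat kernels, max over t of ||exp(-t L1) - exp(-t L2)||_F. Below is a minimal sketch of that same-size version; the bounded search interval for t is an arbitrary choice, and the generalization to graphs of different sizes (which requires prolongation maps) is omitted.

```python
# Minimal sketch of graph diffusion distance for two equal-size graphs:
#   gdd(G1, G2) = max_t || exp(-t * L1) - exp(-t * L2) ||_F
# where L1, L2 are the graph Laplacians.
import networkx as nx
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

def gdd(G1: nx.Graph, G2: nx.Graph) -> float:
    L1 = nx.laplacian_matrix(G1).toarray().astype(float)
    L2 = nx.laplacian_matrix(G2).toarray().astype(float)

    def neg_kernel_gap(t: float) -> float:
        # Frobenius norm of the heat-kernel difference at diffusion time t,
        # negated so that a scalar minimizer finds the maximizing t.
        return -np.linalg.norm(expm(-t * L1) - expm(-t * L2), "fro")

    # The kernel-gap curve is not guaranteed unimodal, so a bounded scalar
    # search is a heuristic; a grid scan over t is a safer alternative.
    res = minimize_scalar(neg_kernel_gap, bounds=(1e-4, 10.0), method="bounded")
    return -res.fun

print(gdd(nx.cycle_graph(8), nx.path_graph(8)))
```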

Graph Prolongation Convolutional Networks

Graph Prolongation Convolutional Networks (GPCNs) are a type of machine learning model that learns to approximate its input dataset at multiple spatial scales. It accomplishes this via optimized prolongation and restriction maps, which coarsen and refine the input. The resulting model is similar in spirit to U-Net (a well-known convolutional architecture for images), but operates on arbitrary graphs rather than regular grids.
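As a rough illustration of the prolongation/restriction pattern (not the architecture from the paper), the PyTorch sketch below couples a fine-scale and a coarse-scale graph convolution through a learned prolongation matrix P, using its transpose for restriction and a Galerkin-style coarse operator PᵀAP.

```python
# Toy two-level illustration of prolongation/restriction: graph convolution
# at a fine scale and a coarse scale, coupled by a learned prolongation
# matrix P (restriction taken to be P^T). A sketch of the general pattern
# only, not the GPCN architecture itself.
import torch
import torch.nn as nn

class TwoLevelGraphConv(nn.Module):
    def __init__(self, n_fine: int, n_coarse: int, channels: int):
        super().__init__()
        self.P = nn.Parameter(torch.randn(n_fine, n_coarse) * 0.1)  # learned prolongation
        self.fine_conv = nn.Linear(channels, channels)
        self.coarse_conv = nn.Linear(channels, channels)

    def forward(self, A_fine: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        # Fine-scale propagation: neighborhood aggregation + linear mixing.
        h_fine = torch.relu(self.fine_conv(A_fine @ X))
        # Restrict features and adjacency to the coarse scale via P^T.
        X_c = self.P.t() @ X
        A_c = self.P.t() @ A_fine @ self.P  # Galerkin-style coarse operator
        h_coarse = torch.relu(self.coarse_conv(A_c @ X_c))
        # Prolong the coarse output back up and combine (U-Net-like skip).
        return h_fine + self.P @ h_coarse

# Usage on a small path graph (adjacency with self-loops).
n_fine, n_coarse, channels = 12, 4, 8
A = (torch.eye(n_fine)
     + torch.diag(torch.ones(n_fine - 1), 1)
     + torch.diag(torch.ones(n_fine - 1), -1))
model = TwoLevelGraphConv(n_fine, n_coarse, channels)
print(model(A, torch.randn(n_fine, channels)).shape)  # torch.Size([12, 8])
```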