Seminar Announcement

Principled Methods for Learning and Understanding of Neural Networks

  • Speaker: Dr. Furong Huang
  • University of Maryland, Department of Computer Science
  • Date: Friday, October 26, 2018
  • Time: 1:00pm - 2:00pm
  • Location: Room T3 (NVC)


Deep neural networks, which derive rich features through compositions of nonlinear layers, have produced breakthrough successes in machine learning. Despite their empirical success, several foundational questions remain open: (1) Why can we train deep networks, given their highly non-convex, extremely high-dimensional loss functions? (2) Why do deep networks generalize, and how can we derive non-vacuous generalization bounds? (3) How can we design robust networks that defend against adversarial perturbations?

To address the first question, on learning, we developed a theoretical justification of why deep residual networks are easier to optimize than non-residual ones, and provided the first exponentially decaying error bound for deep ResNets using a new algorithm called BoostResNet. BoostResNet trains each residual block sequentially, requiring only that each layer provide a better-than-a-weak-baseline oracle for predicting labels.

To understand the generalization ability of deep neural networks, we introduce an efficient mechanism, reshaped tensor decomposition, which compresses neural networks by exploiting three types of invariant structure: periodicity, modulation, and low rank. Our method exploits these invariances using a technique called tensorization (reshaping the layers into higher-order tensors) combined with higher-order tensor decompositions on top of the tensorized layers. Our compression method improves on low-rank approximation methods and can be incorporated into most existing neural network compression methods to achieve better compression. Based on the compression, we derive non-vacuous, data-aware generalization bounds for deep neural networks.

Lastly, I will introduce the concept of certified defense against worst-case adversarial examples and discuss the difficulties involved.
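As a concrete illustration of why tensorization can expose structure that plain low-rank approximation misses, consider a weight matrix with hidden periodic (Kronecker) structure. The sketch below is illustrative only, not the speaker's actual algorithm: the Kronecker construction and the `rearrange_kron` helper are assumptions for this toy example. Reshaping the matrix into a higher-order tensor and re-flattening it turns a full-rank matrix into an exactly rank-1 one, which a standard decomposition then compresses dramatically.

```python
import numpy as np

def rearrange_kron(W, m, n, p, q):
    """Tensorize an (m*p) x (n*q) matrix: reshape it into an (m*n) x (p*q)
    matrix whose rank reveals any hidden Kronecker (periodic) structure."""
    return W.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # coarse-scale factor (assumed, for illustration)
B = rng.standard_normal((8, 8))    # fine-scale factor (assumed, for illustration)
W = np.kron(A, B)                  # a 32 x 32 "layer" with hidden periodic structure

# Viewed as an ordinary matrix, W is full rank, so truncated SVD on W
# itself cannot compress it without large error.
assert np.linalg.matrix_rank(W) == 32

# After tensorization, the rearranged matrix is exactly rank 1 ...
R = rearrange_kron(W, 4, 4, 8, 8)
assert np.linalg.matrix_rank(R) == 1

# ... so a rank-1 decomposition recovers the two small factors:
U, s, Vt = np.linalg.svd(R, full_matrices=False)
A_hat = (U[:, 0] * np.sqrt(s[0])).reshape(4, 4)
B_hat = (Vt[0] * np.sqrt(s[0])).reshape(8, 8)
assert np.allclose(W, np.kron(A_hat, B_hat))

# Storage drops from 32*32 = 1024 entries to 4*4 + 8*8 = 80 entries.
```

The same idea generalizes to higher-order reshapings and tensor decompositions, which is what makes the tensorized layers compressible when the invariant structure is present.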

Speaker's Biography

Dr. Furong Huang is an assistant professor in the Department of Computer Science at the University of Maryland. She obtained her BS degree from Zhejiang University and her PhD from the University of California, Irvine. Dr. Huang's research focuses on machine learning, high-dimensional statistics, and distributed algorithms. She has made significant contributions to non-convex optimization for spectral methods (matrix and tensor decomposition) and to learning latent variable graphical models on distributed systems with large-scale data. Her work is characterized by the development and application of novel numerical methods to diverse problems in social networks, natural language processing, image processing, and computational biology. She was a keynote speaker at the Tensor Workshop at ICCV'17 and organized the Matrix Factorization Workshop at the 2017 Heidelberg Laureate Forum. She received the 2017 Adobe Research Award.