This talk focuses on the statistical sample complexity and model reduction of Markov decision processes (MDPs). We begin by surveying recent advances on the complexity of solving MDPs without any dimension reduction. In the first part, we study the statistical state compression of general Markov processes. We propose a spectral state compression method for learning state features and aggregation structures from data.
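As a rough illustration of the spectral idea, here is a minimal sketch on a hypothetical toy chain (the construction, sizes, and clustering criterion below are my own assumptions, not details from the talk): a Markov chain whose transition matrix has low rank because each state belongs to one of a few latent meta-states. The top singular subspace of the empirical transition matrix then yields low-dimensional state features in which states sharing a meta-state nearly coincide, exposing the aggregation structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy construction: 12 states, each assigned to one of r = 3
# latent meta-states; a state's outgoing distribution depends only on its
# meta-state label, so the transition matrix P has rank at most r.
n_states, r = 12, 3
labels = np.arange(n_states) % r                   # ground-truth aggregation
meta_P = np.full((r, n_states), 0.1 / n_states)    # small uniform background mass
for k in range(r):
    meta_P[k, 4 * k: 4 * k + 4] += 0.9 / 4         # each meta-state favors its own block
P = meta_P[labels]                                 # rows repeat by label -> rank <= r

# Simulate a trajectory and form the empirical transition matrix.
T = 100_000
traj = np.empty(T, dtype=int)
traj[0] = 0
for t in range(1, T):
    traj[t] = rng.choice(n_states, p=P[traj[t - 1]])
counts = np.zeros((n_states, n_states))
np.add.at(counts, (traj[:-1], traj[1:]), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Spectral step: the top-r singular subspace of P_hat gives an r-dimensional
# feature for each state. States in the same meta-state get nearly identical
# features, so clustering the feature rows recovers the aggregation structure.
U, s, _ = np.linalg.svd(P_hat)
features = U[:, :r] * s[:r]
```

In this sketch the within-meta-state feature distances are driven only by sampling noise, while between-meta-state distances stay bounded away from zero, so even a naive clustering of the rows of `features` separates the groups.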
Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation problems. The premise is that, despite nonconvexity, the loss function may possess benign geometric properties, such as local strong convexity or local restricted convexity, that enable fast global convergence under carefully designed initializations.
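A minimal sketch of this paradigm, using real-valued phase retrieval as a stand-in example (the specific problem, step size, and iteration count are my choices, not from the abstract): the least-squares loss over phaseless measurements is nonconvex, yet a spectral initialization lands inside a locally strongly convex basin, after which plain gradient descent converges to the planted signal up to a global sign.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical instance: observe y_i = (a_i^T z)^2 and minimize the nonconvex
# loss f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2. Both z and -z are global
# minima, so the loss cannot be convex, but it is benign near them.
d, m = 20, 1000
z = rng.standard_normal(d)
z /= np.linalg.norm(z)                  # planted signal, ||z|| = 1
A = rng.standard_normal((m, d))         # Gaussian sensing vectors
y = (A @ z) ** 2                        # phaseless measurements

# Spectral initialization: top eigenvector of (1/m) sum_i y_i a_i a_i^T,
# scaled so its squared norm matches mean(y), an estimate of ||z||^2.
Y = (A.T * y) @ A / m
eigvals, eigvecs = np.linalg.eigh(Y)
x = np.sqrt(y.mean()) * eigvecs[:, -1]

# Gradient descent on f: grad f(x) = (1/m) sum_i ((a_i^T x)^2 - y_i)(a_i^T x) a_i.
step = 0.05
for _ in range(500):
    Ax = A @ x
    grad = A.T @ ((Ax ** 2 - y) * Ax) / m
    x = x - step * grad

err = min(np.linalg.norm(x - z), np.linalg.norm(x + z))  # recovery up to sign
```

Started from a random point instead of the spectral estimate, the same iteration can stall or find the wrong sign basin; the careful initialization is what makes the local geometry usable.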
Soft biomaterials such as human skin have very different mechanical properties from conventional electronics, requiring unusual materials and geometries to match the behavior of the skin. One of the biggest challenges in stretchable electronics is the transfer of power and data signals, since physical wiring is easily pulled out or damaged. In my talk, I will discuss all aspects of creating inductors and power circuits for wireless power transfer to stretchable systems. I will focus on the use of room-temperature liquid metals and stretchable magnetic materials to maximize power transfer.
Recent years have witnessed significant progress in entropy estimation, particularly in the large-alphabet regime. Concretely, there exist efficiently computable, information-theoretically optimal estimators whose performance with n samples is essentially that of the maximum likelihood estimator with n log(n) samples, a phenomenon termed "effective sample size boosting". Generalizations to processes with memory (estimation of the entropy rate) and continuous distributions (estimation of the differential entropy) have remained largely open.
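To make the baseline concrete, here is a sketch of the maximum likelihood (plug-in) estimator the abstract refers to, on a hypothetical uniform distribution (the alphabet size and sample sizes are illustrative choices). In the large-alphabet regime, where the sample size n is comparable to the alphabet size S, the plug-in estimate is badly biased downward because it assigns zero probability to unseen symbols; this is the gap the improved estimators close.

```python
import numpy as np

rng = np.random.default_rng(2)

def plugin_entropy(samples, base=np.e):
    """Maximum-likelihood (plug-in) entropy estimate: the entropy of the
    empirical distribution of the samples."""
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / counts.sum()
    return -np.sum(p_hat * np.log(p_hat)) / np.log(base)

# Large-alphabet regime: alphabet size S comparable to sample size n.
S = 10_000
p = np.ones(S) / S                      # uniform distribution, true entropy = ln S
true_H = np.log(S)

for n in (S // 4, S, 10 * S):
    est = plugin_entropy(rng.choice(S, size=n, p=p))
    print(f"n = {n:6d}: plug-in = {est:.3f}, true = {true_H:.3f}")
```

With n = S/4 the plug-in estimate cannot exceed log(n) < log(S), so it underestimates severely; only at n much larger than S does it approach the truth, which is exactly the "n versus n log(n)" effective-sample-size gap.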
The signal processing (SP) landscape has been enriched by recent advances in artificial intelligence (AI) and machine learning (ML), especially since 2010 or so, yielding new tools for signal estimation, classification, prediction, and manipulation. Layered signal representations, nonlinear function approximation, and nonlinear signal prediction are now feasible at very large scale in both dimensionality and data size. These are leading to significant performance gains in a variety of long-standing problem domains (e.g., speech, vision), as well as providing the ability to construct new