...

May 21: Machine learning applications for hospitals

Date: May 21 at 3pm

Speaker: David Scheinker

Abstract: Academic hospitals and particle accelerators have a lot in common. Both are complex organizations; employ numerous staff and scientists; deliver a variety of services; research how to improve the delivery of those services; and do it all with a variety of large, expensive machines. My group focuses on helping the Stanford hospitals, primarily the Children's Hospital, improve throughput, decision support, resource management, innovation, and education. I'll present brief overviews of a variety of ML-based approaches to projects in each of these areas; for example, integer programming to optimize surgical scheduling and neural networks to interpret continuous-time waveform monitor data. I will conclude with a broader vision for how modern analytics methodology could potentially transform healthcare delivery. More information on the projects to be discussed is available at surf.stanford.edu/projects.
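The surgical-scheduling use of integer programming mentioned in the abstract might look like the following toy formulation (all durations, capacities, and the packing objective here are invented for illustration and are not the speaker's actual model), solved with SciPy's mixed-integer linear programming interface:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

durations = np.array([120, 90, 60])   # surgery lengths in minutes (made up)
capacity = np.array([150, 120])       # operating-room block lengths (made up)
S, R = len(durations), len(capacity)

# Binary decision variable x[s, r] = 1 if surgery s goes in room r,
# flattened row-major to a length S*R vector.
# Objective: maximize total scheduled minutes (milp minimizes, so negate).
c = -np.repeat(durations, R).astype(float)

# Each surgery is assigned to at most one room.
A_assign = np.zeros((S, S * R))
for s in range(S):
    A_assign[s, s * R:(s + 1) * R] = 1

# The surgeries placed in each room must fit within its block.
A_cap = np.zeros((R, S * R))
for r in range(R):
    for s in range(S):
        A_cap[r, s * R + r] = durations[s]

res = milp(
    c,
    constraints=[LinearConstraint(A_assign, 0, 1),
                 LinearConstraint(A_cap, 0, capacity)],
    integrality=np.ones(S * R),
    bounds=Bounds(0, 1),
)
assignment = np.round(res.x).reshape(S, R)  # assignment[s, r] = 1 if scheduled
```

With these made-up numbers the optimum packs surgeries 2 and 3 (90 + 60 minutes) into the 150-minute room and surgery 1 into the 120-minute room, scheduling all 270 minutes.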

 

June 4: Rapid Gaussian Process Training via Structured Low-Rank Kernel Approximation of Gridded Measurements

Date: June 4, 3pm

Speaker: Franklin Fuller

Abstract: The cubic scaling of matrix inversion with the number of data points is the main computational cost in Gaussian Process (GP) regression. Sparse GP approaches reduce the complexity of matrix inversion to linear by making an optimized low-rank approximation to the kernel, but the quality of the approximation depends on (and scales with) the number of "inducing" or representative points allowed. When the problem at hand allows the kernel to be decomposed into a Kronecker product of lower-dimensional kernels, many more inducing points can be feasibly processed by exploiting the Kronecker factorization, resulting in a much higher quality fit. Kronecker factorizations suffer from exponential scaling in the dimension of the input, however, which has limited this approach to problems with only a few input dimensions. It was recently shown how this problem can be circumvented by making an additional low-rank approximation across input dimensions, resulting in an approach that scales linearly in both the number of data points and the input dimensionality. We explore a special case of this recent work wherein the observed data are measured on a complete multi-dimensional grid (not necessarily uniformly spaced), which is a very common scenario in scientific measurement environments. In this special case, the problem decomposes over the axes of the input grid, making the cost scale roughly linearly with the largest axis of the grid. We apply this approach to deconvolve linearly mixed spectroscopic signals and are able to optimize kernel hyperparameters on datasets containing billions of measurements in minutes on a laptop.
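The core Kronecker trick for gridded data can be sketched as follows (a minimal illustration assuming a squared-exponential kernel per axis; function names and the noise handling are invented for this sketch, not the speaker's implementation). Because the full kernel is a Kronecker product of small per-axis kernels, per-axis eigendecompositions let one solve the GP linear system without ever forming the full grid-sized matrix:

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0):
    # Squared-exponential kernel matrix for 1-D inputs x
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def kron_gp_solve(axes, y, noise=1e-2):
    """Solve (K + noise*I) alpha = y where K = kron(K_1, ..., K_d)
    and each K_i is built from one axis of a complete grid.
    Only per-axis matrices are ever decomposed, so the cost is
    dominated by the largest axis, not the full grid size."""
    eigvals, eigvecs = [], []
    for x in axes:
        w, Q = np.linalg.eigh(rbf_kernel(x))
        eigvals.append(w)
        eigvecs.append(Q)
    # Eigenvalues of the Kronecker product = Kronecker product of eigenvalues
    lam = eigvals[0]
    for w in eigvals[1:]:
        lam = np.kron(lam, w)
    # Rotate y into the joint eigenbasis: apply Q_i^T along each grid axis
    shape = [len(x) for x in axes]
    Z = y.reshape(shape)
    for i, Q in enumerate(eigvecs):
        Z = np.moveaxis(np.tensordot(Q.T, Z, axes=([1], [i])), 0, i)
    # Diagonal solve, then rotate back with Q_i along each axis
    Z = (Z.ravel() / (lam + noise)).reshape(shape)
    for i, Q in enumerate(eigvecs):
        Z = np.moveaxis(np.tensordot(Q, Z, axes=([1], [i])), 0, i)
    return Z.ravel()
```

On a small grid this agrees with the direct dense solve, while for large grids it avoids the cubic cost in the total number of points entirely.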

 

TBD: Machine learning at LLNL (tentative)

Date: TBD

Speaker: Brian Spears (LLNL)

Abstract: TBD

 

TBD: On analyzing urban form at global scale with remote sensing data and generative adversarial networks

Date: TBD

Speaker: Adrian Albert

Abstract: Current analyses of urban development use either simple, bottom-up models that have limited predictive performance, or highly engineered, complex models relying on many sources of survey data that are typically scarce and difficult and expensive to collect. This talk presents work in progress on a data-driven, flexible, non-parametric framework to simulate realistic urban forms using generative adversarial networks and planetary-scale remote-sensing data. To train our urban simulator, we curate and put forth a new dataset on urban form, integrating spatial distribution maps of population, nighttime luminosity, and built land densities, as well as best-available information on city administrative boundaries for 30,000 of the world's largest cities. This is the first analysis to date of urban form using modern generative models and remote-sensing data.

...