Graphons and Machine Learning: Modeling and Estimation of Sparse Networks at Scale
Abstract: There are numerous examples of sparse networks at scale, for example the Internet, the WWW, online social networks, and large bipartite networks used for recommendations. How do we model and learn these networks? For relatively small or moderately sized networks, it is appropriate to model the network parametrically and attempt to learn these parameters. For networks at scale, a non-parametric representation is more appropriate. In this talk, I first review the theory of graphons, developed over the last 15 years to describe limits of dense graphs, and the more recent theory describing sparse graphs of unbounded average degree, including power-law graphs. I then show how to use these graphons as non-parametric models for sparse networks. I show how to obtain consistent estimators of these non-parametric models, and moreover how to do so in a way that protects the privacy of individuals on the network. Finally, I provide algorithms based on these models, for example recommendation algorithms for sparse bipartite networks such as Netflix.
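To make the graphon model in the abstract concrete, here is a minimal sketch (an illustration, not material from the talk itself) of how a random graph is sampled from a graphon W: each vertex draws a latent position uniformly in [0,1], and each pair connects independently with probability given by W at those positions, scaled by a density parameter rho to produce sparse graphs. The example graphon W(x, y) = x * y is a hypothetical choice for illustration.

```python
import numpy as np

def sample_graphon_graph(W, n, rho=1.0, seed=0):
    """Sample an undirected simple graph from a graphon W.

    Each vertex i gets a latent position u_i ~ Uniform[0, 1]; edge {i, j}
    appears independently with probability min(1, rho * W(u_i, u_j)).
    Letting rho -> 0 as n grows gives sparse graphs in the sense of the
    abstract (vanishing edge density, possibly unbounded average degree).
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    # Symmetric matrix of pairwise connection probabilities, no self-loops.
    P = np.minimum(1.0, rho * W(u[:, None], u[None, :]))
    np.fill_diagonal(P, 0.0)
    # Sample the upper triangle, then symmetrize.
    upper = np.triu(rng.uniform(size=(n, n)) < P, k=1)
    return upper | upper.T

if __name__ == "__main__":
    # Illustrative graphon W(x, y) = x * y with sparsity factor rho = 0.2.
    A = sample_graphon_graph(lambda x, y: x * y, n=500, rho=0.2, seed=42)
    print("edges:", int(A.sum()) // 2)
```

The same sampling scheme underlies graphon estimation: given an observed adjacency matrix, the goal is to recover (a version of) W, which this generative direction makes precise.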
Jennifer Tour Chayes, Microsoft Research, U.S.
This is one of seven virtual plenary talks originally scheduled for the 2020 SIAM Conference on Mathematics of Data Science. For more information on this session, visit https://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=69232. To view the virtual program and register for other invited plenary talks, minitutorial talks, and minisymposia, please visit the MDS20 website at https://www.siam.org/conferences/cm/conference/mds20.