Large streams of data, mostly unlabeled.

The machine learning approach: fit models to data. How does it work? Take the raw data, hypothesize a model class, and use a learning algorithm to set the model parameters so the model matches the data.
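A minimal sketch of that loop, with illustrative choices not taken from the notes: the hypothesized model is linear, $y = \theta x + \text{noise}$, and the learning algorithm is plain gradient descent on squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw data: noisy observations of a (hypothetical) linear model y = theta* x
theta_star = 3.0
x = rng.normal(size=200)
y = theta_star * x + 0.1 * rng.normal(size=200)

# Learning algorithm: gradient descent on the mean squared error
theta = 0.0
lr = 0.1
for _ in range(100):
    grad = np.mean(2 * (theta * x - y) * x)  # d/dtheta of mean((theta*x - y)^2)
    theta -= lr * grad

print(theta)  # close to theta* = 3.0
```

The three ingredients map directly onto the recipe above: `x, y` are the raw data, the linear form is the hypothesized model, and the descent loop is the learning algorithm matching parameters to data.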

What makes a good machine learning algorithm?

- Performance guarantees: $\theta \approx \theta^*$ (statistical consistency and finite-sample bounds)
- Works with real-world sensors, data, and resources (high-dimensional, large-scale, …)
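Statistical consistency can be seen empirically: as the sample size $n$ grows, the learned $\theta$ approaches $\theta^*$. A hedged sketch, again assuming an illustrative linear model with an ordinary-least-squares estimator (not specified in the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_star = 2.0

def estimate(n):
    """Ordinary least squares estimate of theta* from n noisy samples."""
    x = rng.normal(size=n)
    y = theta_star * x + rng.normal(size=n)
    return np.sum(x * y) / np.sum(x * x)

# Estimation error |theta - theta*| shrinks as n grows (roughly like 1/sqrt(n))
errors = [abs(estimate(n) - theta_star) for n in (100, 10_000, 1_000_000)]
print(errors)
```

Finite-sample bounds make the same statement quantitative: they bound $|\theta - \theta^*|$ with high probability for a given $n$, rather than only in the limit.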

For many classes of dynamical systems, learning is provably intractable. You must choose the right model class, or else all bets are off!

Look into:

- Spectral Learning approaches to machine learning
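One concrete instance of the spectral idea, in the Ho-Kalman spirit (my choice of example, not from the notes): the hidden state dimension of a linear dynamical system can be read off from the singular values of a Hankel matrix built from its impulse response.

```python
import numpy as np

# A linear dynamical system with 2 hidden states:
#   x_{t+1} = A x_t + B u_t,   y_t = C x_t
A = np.array([[0.9, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Impulse response (Markov parameters): h_k = C A^k B
h = [float(C @ np.linalg.matrix_power(A, k) @ B) for k in range(10)]

# Hankel matrix of the impulse response; its rank equals the state dimension
H = np.array([[h[i + j] for j in range(5)] for i in range(5)])
s = np.linalg.svd(H, compute_uv=False)

# The singular values drop to ~0 after index 2, revealing the 2 hidden states
print([round(v, 6) for v in s])
```

The appeal of spectral methods is that the SVD step is a convex, global computation with no local optima, which is one route around the intractability of learning general dynamical systems.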