This is where all the excitement is today. Everybody's talking about deep learning; every time you turn around there's an article about it, so I just wanted to talk briefly about it. I don't know whether, when a computer scientist says deep learning, they're automatically implying neural networks, but from my perspective there's at least a strong correlation between the two.

So, one of these little boxes here could represent a linear regression learning algorithm. This one might be K-means. They don't have to be the same; they can be, but they don't have to. Think of each box as one little learning algorithm, and maybe it makes sense that they're all the same, perhaps. The idea behind a neural network is that there's an input layer that takes the inputs, then a number of hidden layers, and finally an output layer that makes the prediction. The key is that they have layers.

There's some really interesting stuff to go look at. If you go to Numenta, you'll see they're all over sparse distributed representations and neuromorphic computing, at least they were the last time I went there. There's another article I found on a neuromemristive processor that breaks boundaries in machine learning. There's a whole ton of information on deeplearning.net. Google is huge in this; go to their Brain site.

We're going to watch a short video, from a few years ago, of Microsoft's Chief Research Officer Rick Rashid, who's still there. He speaks English, which gets translated into Chinese text, which then gets translated into Chinese speech. They're using deep learning to do that. This material is beyond the scope of this course; I just wanted to whet your appetite for machine learning and show some examples of what can be done.
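The layered structure mentioned above (input layer, hidden layers, output layer) can be sketched as a tiny forward pass. This is a minimal illustration, not material from the lecture: the layer sizes, random weights, and ReLU activation are all assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch of a layered network: input -> hidden -> output.
# Sizes and weights are arbitrary; no training is performed here.

rng = np.random.default_rng(0)

def relu(x):
    """Common hidden-layer activation: max(0, x) elementwise."""
    return np.maximum(0.0, x)

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a single hidden layer to a prediction."""
    hidden = relu(x @ w_hidden + b_hidden)  # hidden-layer activations
    return hidden @ w_out + b_out           # output layer: the prediction

# 4 input features, 8 hidden units, 1 output value
w_hidden = rng.normal(size=(4, 8))
b_hidden = np.zeros(8)
w_out = rng.normal(size=(8, 1))
b_out = np.zeros(1)

x = rng.normal(size=(3, 4))  # a batch of 3 example inputs
y = forward(x, w_hidden, b_hidden, w_out, b_out)
print(y.shape)  # one prediction per input example: (3, 1)
```

A deeper network just repeats the hidden-layer step several times before the output layer; that stacking of layers is what puts the "deep" in deep learning.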