The images in the Horses or Humans dataset are all 300 by 300 pixels, so we needed quite a few convolutional layers to reduce the images down to condensed features. That, of course, can slow down the training. So let's take a look at what happens if we change the images to 150 by 150, a quarter of the original pixel data, and see what the impact would be. We'll start as before by downloading and unzipping the training and test sets. Then we'll point some variables at the training and test sets before setting up the model.

First, we'll import TensorFlow, and now we'll define the layers for the model, as sketched in the code below. Note that we've changed the input shape to 150 by 150, and we've removed the fourth and fifth convolution-and-max-pool combinations. Our model summary now shows the first layer's output at 148 by 148, the result of convolving the 150 by 150 image. By the time we're through all of the convolutions and pooling, we end up with 17 by 17. We'll compile our model as before, and we'll create our generators as before, but note that the target size has now changed to 150 by 150.

Now we can begin the training, and we can see after the first epoch that the training is fast, and accuracy and validation accuracy aren't too bad either. As training continues, both accuracy values tick up. Often you'll see training accuracy values that are really high, like 1.000, which is likely a sign that you're overfitting. When we reach the end, we have really high accuracy on the training data, about 0.99, which is much too high. Validation accuracy is about 0.84, which is pretty good, but let's put the model to the test with some real images.

Let's start with this image of the girl and the horse. It still classifies as a horse. Next, let's take a look at this cool horsey, which is also correctly categorized. These cuties are also correctly categorized, but this one is still wrongly categorized. The most interesting case, I think, is this woman. When we used 300 by 300 and more convolutions before, she was correctly classified, but now she isn't. This is a great example of the importance of measuring your model against a large validation set, inspecting where it got things wrong, and seeing what you can do to fix it. Using these smaller images is much cheaper for training, but then errors like this one, a woman with her back turned and her legs obscured by a dress, will happen, because we don't have that kind of data in the training set. That's a nice hint about how to edit your dataset for the best effect in training.
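Here is a minimal sketch of the reduced setup described above, assuming the usual horses-or-humans layout with binary horse/human labels. The filter counts (16, 32, 64), the RMSprop optimizer, the batch sizes, the epoch count, and the /tmp paths are illustrative assumptions, not necessarily the exact values used in the video:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Three conv/max-pool pairs instead of five; input is now 150x150.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),     # 150 -> 148
    tf.keras.layers.MaxPooling2D(2, 2),                    # 148 -> 74
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'), # 74 -> 72
    tf.keras.layers.MaxPooling2D(2, 2),                    # 72 -> 36
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'), # 36 -> 34
    tf.keras.layers.MaxPooling2D(2, 2),                    # 34 -> 17
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')  # binary: horse vs. human
])

model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              metrics=['accuracy'])

# Generators rescale pixel values and resize every image to 150x150
# on the fly. Directory paths are placeholders for wherever the
# training and validation sets were unzipped.
train_datagen = ImageDataGenerator(rescale=1 / 255)
validation_datagen = ImageDataGenerator(rescale=1 / 255)

train_generator = train_datagen.flow_from_directory(
    '/tmp/horse-or-human/',
    target_size=(150, 150),
    batch_size=128,
    class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
    '/tmp/validation-horse-or-human/',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

history = model.fit(
    train_generator,
    epochs=15,
    validation_data=validation_generator)

# Classifying a single real-world image (the path is a placeholder).
# With alphabetical subdirectories ('horses' before 'humans'),
# flow_from_directory maps horses to 0 and humans to 1.
img = image.load_img('/tmp/test-image.jpg', target_size=(150, 150))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
print('human' if model.predict(x)[0][0] > 0.5 else 'horse')
```

With the default valid padding and 2 by 2 pooling, the feature maps shrink 150 to 148 to 74 to 72 to 36 to 34 to 17, which matches the 17 by 17 shown in the model summary.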