Now, throughout the majority of this course so far, we've focused exclusively on using the BigQuery web UI for data analysis. Now I want to introduce you to a new and powerful tool in the toolkit of every data scientist: the online collaborative notebook. For those of you who have heard of Jupyter or IPython notebooks before, a lot of this content will look familiar, and that's because Cloud Datalab is a free, open source Google Cloud Platform tool that shares a lot of the same roots. Let's discuss a little bit more about what this tool is and why you should add Cloud Datalab as the next tool to learn in your analysis toolkit.

So what is Cloud Datalab? Cloud Datalab is a tool that lets you do exploratory analysis, but also unlock the power of machine learning models and advanced statistical analysis, all directly within one interactive notebook. As you're going to see in our quick demo, Cloud Datalab notebooks are made up of what we call cells. A cell represents an individual part of the page where we can run arbitrary code. It could be SQL, it could be invoking TensorFlow APIs, it could be building pandas DataFrames with Python and R to manipulate those records and outputs. You can also see the results immediately, right within your notebook, and build visualizations based on those saved results that you've stored. So it's highly flexible, and the number one keyword that you'll keep hearing is that it's collaborative. Once you've built your code and your visualizations, and you've added some markdown, which is just text that you've formatted nicely to give contextual guidance to the users who are going to be looking at your work, you can have all of that as part of one notebook package.

Here are two examples of what completed notebooks could look like. All the way at the top, you have a bold markdown heading. You have some cells that contain code, so you can immediately see the underlying SQL that created the bar chart and the line chart beneath it. And these notebooks are meant to be iterative. What you can actually do is check an entire notebook into version control, and you can have multiple people working on the same type of analysis and experimentation at once. So basically you download a copy of somebody else's latest notebook, begin to look into their SQL or their statistical modeling, get a feel for how they arrived at their assumptions, their results, and their findings, and then read along with all the markdown they've created. You can quickly come up to speed without digging through a bunch of their SQL code in BigQuery. And what you'll note is that, just like in BigQuery, these individual cells can be run one at a time, or we can run an entire notebook at once.
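To make that concrete, here's a minimal sketch of what a single Datalab cell might contain: a query against BigQuery pulled into a pandas DataFrame and charted inline. It assumes the google.datalab.bigquery Python API that ships with Datalab; the public table used here is purely illustrative, and your own project and table names would differ.

```python
import google.datalab.bigquery as bq

# Standard SQL query against a public BigQuery dataset (illustrative table)
sql = """
SELECT name, SUM(number) AS total
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY total DESC
LIMIT 10
"""

# Run the query and materialize the results as a pandas DataFrame
df = bq.Query(sql).execute().result().to_dataframe()

# Chart the saved results right in the notebook output area
df.plot(kind='bar', x='name', y='total', title='Top 10 names')
```

Running that one cell renders the result table and the chart directly beneath the code, which is exactly the kind of output the example notebooks above show.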
So here's a typical use case you might see. One of your data scientist peers has done a lot of work building a machine learning or statistical model, and they've handed the notebook to you to continue that analysis, or to vet it and see whether their assumptions make sense. You'll download one of their notebooks, or they'll share the repository with you. Then, assuming you have access to the same underlying data sources they do, you can run the entire notebook, and with a click of a button you'll see the exact same results and how those results were built up from the ground up. Generally you'll read a notebook from top to bottom, and you'll see all the pre-processing and all the different queries and models that are invoked to eventually get you those results.
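If you ever want to reproduce that run-everything-top-to-bottom behavior outside the notebook UI, one way to script it is with the standard Jupyter tooling that notebooks like these build on. This is a general Jupyter technique rather than a Datalab-specific API, and the notebook filenames below are hypothetical:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Load a colleague's shared notebook (hypothetical filename)
nb = nbformat.read('colleagues_analysis.ipynb', as_version=4)

# Execute every cell from top to bottom, like "run all" in the UI
ExecutePreprocessor(timeout=600).preprocess(nb, {'metadata': {'path': '.'}})

# Save an executed copy with all of the outputs populated
nbformat.write(nb, 'colleagues_analysis_executed.ipynb')
```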