Now that you have all the pieces, let's start looking at how to put them together into data processing infrastructure. There are a few common assemblies of services and technologies, and you'll see them used repeatedly in different contexts. Here's an example showing the many manual ingest solutions available.

You can use the gsutil command line tool to load files into Cloud Storage. But if you want to load data into BigQuery, you need to be able to identify the structure. The BigQuery command line tool, bq, is good for uploading large data files and for scheduling data file uploads. You can use bq to create tables, define schemas, load data, and run queries. The bq command is available on Compute Engine instances and in Cloud Shell, or you can install it on any client machine as part of the Google Cloud Software Development Kit, or SDK.

You can also load data into BigQuery from the GCP console. You can stream data using Cloud Dataflow and from Cloud Logging, or you can use POST calls from a program. And it's very convenient that BigQuery can automatically detect the schema of CSV and JSON format files.

Another tip: think about data in terms of the three Vs, volume, velocity, and variety. How much, how often, and how consistent? This will guide you to the best approach for ingesting the data. In brief, use gsutil for uploading files. Use the Storage Transfer Service when the data is in another location, such as another cloud. And use the Transfer Appliance when the data is too big to transfer electronically. A few code sketches of the upload, batch load, and streaming paths follow below.
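To make the file upload step concrete, here is a minimal sketch that does the same thing as `gsutil cp`, but through the google-cloud-storage Python client library instead of the CLI. The bucket and file names are hypothetical placeholders, not anything from the course.

```python
# Sketch: upload a local file to Cloud Storage, the programmatic
# equivalent of `gsutil cp sales.csv gs://my-ingest-bucket/raw/`.
# Bucket and file names are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()  # uses Application Default Credentials
bucket = client.bucket("my-ingest-bucket")
blob = bucket.blob("raw/sales.csv")

# Upload the local file; the library handles chunking and retries.
blob.upload_from_filename("sales.csv")
print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```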
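Next, a sketch of a batch load into BigQuery with schema auto-detection turned on, using the google-cloud-bigquery client library rather than the bq CLI. The project, dataset, table, and URI below are made up for illustration.

```python
# Sketch: load a CSV file from Cloud Storage into BigQuery, letting
# BigQuery auto-detect the schema (mirrors `bq load --autodetect ...`).
# The project, dataset, table, and URI are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.sales_dataset.sales"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,       # let BigQuery infer column names and types
    skip_leading_rows=1,   # treat the first row as a header
)

load_job = client.load_table_from_uri(
    "gs://my-ingest-bucket/raw/sales.csv", table_id, job_config=job_config
)
load_job.result()  # block until the load job finishes

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```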
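Finally, for the streaming path, here is a minimal sketch of the client library's streaming-insert call, which wraps the underlying tabledata.insertAll POST request mentioned above. The table name and row contents are hypothetical.

```python
# Sketch: stream individual rows into an existing BigQuery table.
# Under the hood this issues the tabledata.insertAll POST call.
# Table name and row contents are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.sales_dataset.sales"

rows = [
    {"order_id": "A-1001", "amount": 19.99},
    {"order_id": "A-1002", "amount": 5.49},
]

# Returns a list of per-row errors; an empty list means success.
errors = client.insert_rows_json(table_id, rows)
if errors:
    print(f"Streaming insert failed for some rows: {errors}")
else:
    print(f"Streamed {len(rows)} rows into {table_id}")
```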