Analysing Unstructured Data using MongoDB and PySpark

Coursera Project Network

Learn how to connect a MongoDB database to PySpark

Learn how to analyse an unstructured dataset stored in MongoDB

Learn how to write Spark DataFrames to CSV or MongoDB

1.5 hours
English

By the end of this project, you will know how to analyse unstructured data stored in MongoDB using PySpark. We will use an open source dataset containing information on movies released around the world. I will teach you how to connect a MongoDB database to PySpark, how to analyse an unstructured dataset stored in MongoDB, and how to write the analysis results to a CSV file or back to MongoDB. I will also teach you how to access inner (or nested) documents and how to run SQL queries on a MongoDB collection. You will create a ready-to-use Jupyter notebook for conducting analyses on MongoDB collections using PySpark. After completing the project, you will receive a Zip file containing links to other open source datasets for additional practice!

MongoDB is one of the most commonly used databases for storing unstructured datasets. As a dataset grows, it becomes more practical to use Spark's analytical engine, whose extensive libraries support everything from basic descriptive statistics to machine learning and deep learning. This is a beginner-level course in which we cover the basics of MongoDB and PySpark.

Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.


Unstructured Data, Big Data, MongoDB, PySpark



  1. Upload data to MongoDB Database

  2. Connect to MongoDB using PySpark

  3. Analyse MongoDB collection and access nested documents

  4. Write Spark DataFrame to CSV

  5. Run SQL query on MongoDB collection

  6. Write Spark DataFrame to MongoDB
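The steps above can be sketched in a single PySpark function. This is an illustrative sketch, not the course notebook: it assumes MongoDB Spark Connector 10.x (format "mongodb") is on the Spark classpath, a locally running MongoDB instance, and database, collection, and field names ("films", "movies", "title", "year", "imdb.rating") chosen here to resemble a typical open movies dataset.

```python
# Sketch of the project's steps 2-6 under the assumptions stated above.
# Names of databases, collections, and fields are illustrative.

def mongo_uri(host: str, port: int) -> str:
    """Build the connection string passed to the Spark connector."""
    return f"mongodb://{host}:{port}"

def analyse_movies(uri: str, database: str, collection: str) -> None:
    from pyspark.sql import SparkSession

    # 2. Connect to MongoDB using PySpark
    spark = (
        SparkSession.builder
        .appName("MongoMoviesAnalysis")
        .config("spark.mongodb.read.connection.uri", uri)
        .config("spark.mongodb.write.connection.uri", uri)
        .getOrCreate()
    )

    # 3. Load the collection and reach into a nested document
    #    with dot notation ("imdb" is assumed to be a sub-document)
    df = (
        spark.read.format("mongodb")
        .option("database", database)
        .option("collection", collection)
        .load()
    )
    df.select("title", "imdb.rating").show(5)

    # 4. Write a Spark DataFrame to CSV
    df.select("title", "year").write.mode("overwrite").csv("movies_out")

    # 5. Run a SQL query on the collection via a temporary view
    df.createOrReplaceTempView("movies")
    recent = spark.sql("SELECT title, year FROM movies WHERE year >= 2000")

    # 6. Write the result DataFrame back to MongoDB
    (
        recent.write.format("mongodb")
        .mode("append")
        .option("database", database)
        .option("collection", "recent_movies")
        .save()
    )
```

A call such as `analyse_movies(mongo_uri("localhost", 27017), "films", "movies")` would run the whole pipeline; step 1, uploading the dataset to MongoDB, is normally done outside Spark, for example with the `mongoimport` command-line tool.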