Manipulating big data distributed over a cluster using functional concepts is now widespread in industry, and is arguably one of the first broad industrial applications of functional ideas. This is evidenced by the popularity of MapReduce and Hadoop, and most recently Apache Spark, a fast, in-memory distributed collections framework written in Scala. In this course, we'll see how the data parallel paradigm can be extended to the distributed case, using Spark throughout. We'll cover Spark's programming model in detail, being careful to understand how and when it differs from familiar programming models, like shared-memory parallel collections or sequential Scala collections. Through hands-on examples in Spark and Scala, we'll learn when issues related to distribution, such as latency and network communication, should be considered, and how they can be addressed effectively for improved performance.
Offered by:
École Polytechnique Fédérale de Lausanne
Syllabus - What you will learn from this course
Getting Started + Spark Basics
Get up and running with Scala on your computer. Complete an example assignment to familiarize yourself with our unique way of submitting assignments. This week, we'll bridge the gap between data parallelism in the shared-memory scenario (covered in the prerequisite Parallel Programming course) and the distributed scenario. We'll look at important concerns that arise in distributed systems, like latency and failure. We'll go on to cover the basics of Spark, a functionally oriented framework for big data processing in Scala. We'll end the first week by putting what we've learned about Spark into practice, getting our hands dirty analyzing a real-world data set.
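To make that concrete, here is a minimal sketch (not taken from the course materials) of the RDD style this week introduces; the object name and sample data are illustrative, and it assumes a local master rather than a real cluster:

    import org.apache.spark.{SparkConf, SparkContext}

    object SparkBasics {
      def main(args: Array[String]): Unit = {
        // Run locally on all cores; on a real cluster the master URL would differ.
        val conf = new SparkConf().setAppName("basics").setMaster("local[*]")
        val sc   = new SparkContext(conf)

        // An RDD mirrors the Scala collections API, but transformations are
        // deferred and executed across the cluster only when an action runs.
        val words  = sc.parallelize(Seq("big", "data", "with", "scala", "and", "spark", "data"))
        val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

        // Actions like collect() move results back over the network to the
        // driver, which is where latency becomes a first-class concern.
        counts.collect().foreach(println)
        sc.stop()
      }
    }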
Reduction Operations & Distributed Key-Value Pairs
This week, we'll look at a special kind of RDD called pair RDDs. With this specialized kind of RDD in hand, we'll cover essential operations on large data sets, such as reductions and joins.
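As an illustration (with hypothetical data, not an excerpt from the assignments), here is a short sketch of the two pair-RDD operations named above:

    import org.apache.spark.{SparkConf, SparkContext}

    object PairRdds {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("pairs").setMaster("local[*]"))

        // Hypothetical data: (customerId, purchase amount) and (customerId, name).
        val purchases = sc.parallelize(Seq((1, 20.0), (2, 15.0), (1, 5.0)))
        val customers = sc.parallelize(Seq((1, "Ada"), (2, "Grace")))

        // reduceByKey combines values per key on each node before any data is
        // shuffled, which makes it much cheaper than groupByKey at scale.
        val totals = purchases.reduceByKey(_ + _)

        // join matches values that share a key across two RDDs: (id, (total, name)).
        val report = totals.join(customers)

        report.collect().foreach(println) // e.g. (1,(25.0,Ada))
        sc.stop()
      }
    }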
Partitioning and Shuffling
This week we'll look at some of the performance implications of using operations like joins. Is it possible to get the same result without having to pay for the overhead of moving data over the network? We'll answer this question by delving into how we can partition our data to achieve better data locality, in turn optimizing some of our Spark jobs.
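The answer, in sketch form (illustrative names and partition count, reusing the hypothetical purchase data from above):

    import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

    object Partitioning {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("partitioning").setMaster("local[*]"))

        val purchases = sc.parallelize(Seq((1, 20.0), (2, 15.0), (1, 5.0)))

        // Hash-partition by key and cache the result: all records with the same
        // key now live in the same partition, so subsequent key-based operations
        // on this RDD need no further shuffle over the network.
        val partitioned = purchases.partitionBy(new HashPartitioner(8)).persist()

        // reduceByKey reuses the existing partitioner, so it runs shuffle-free.
        val totals = partitioned.reduceByKey(_ + _)
        totals.collect().foreach(println)
        sc.stop()
      }
    }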
Structured Data: SQL, DataFrames, and Datasets
With our newfound understanding of the cost of data movement in a Spark job, and some experience optimizing jobs for data locality last week, this week we'll focus on how we can more easily achieve similar optimizations. Can structured data help us? We'll look at Spark SQL and its powerful optimizer which uses structure to apply impressive optimizations. We'll move on to cover DataFrames and Datasets, which give us a way to mix RDDs with the powerful automatic optimizations behind Spark SQL.
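For a taste of the structured APIs (again a sketch over hypothetical purchase data; the case class and names are illustrative):

    import org.apache.spark.sql.SparkSession

    // A hypothetical record type; a Dataset keeps this compile-time structure.
    case class Purchase(customerId: Int, amount: Double)

    object StructuredData {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("structured")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // A Dataset combines the typed, functional style of RDDs with the
        // Catalyst optimizer that powers Spark SQL.
        val purchases = Seq(Purchase(1, 20.0), Purchase(2, 15.0), Purchase(1, 5.0)).toDS()

        // Because the structure is known, the optimizer can plan this
        // aggregation far more aggressively than an equivalent RDD job.
        val totals = purchases.groupBy($"customerId").sum("amount")

        totals.show()
        spark.stop()
      }
    }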
Reviews
Top reviews from BIG DATA ANALYSIS WITH SCALA AND SPARK
The sessions were clearly explained and focused. Some of the exercises contained slightly confusing hints and information, but I'm sure those mistakes will be ironed out in future iterations. Thanks!
Excellent overview of Spark, including exercises that solidify what you learn during the lectures. The development environment setup tutorials were also very helpful, as I had not yet worked with sbt.
Great introduction to Spark. Fun assignments. Since it was the first ever session, there were quite a few kinks with the assignments. But the discussion forums rescued me any time I was stuck.
Very nice and effective course. One of the best courses I have taken on Spark online. Many thanks to the course instructor Heather Miller for creating a very detailed and up-to-date course on Spark.
About the Functional Programming in Scala Specialization
Discover how to write elegant code that works the first time it is run.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
Will I earn university credit for completing the course?
More questions? Visit the Learner Help Center.