Use Scala and Spark for data analysis, machine learning and analytics.
Get your data to fly using Spark and Scala for analytics, machine learning and data science.
If you are an analyst or a data scientist, you're used to juggling multiple systems for working with data: SQL, Python, R, Java and more. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.
Scala is a general-purpose programming language, like Java or C++. Its functional-programming nature and the availability of a REPL environment make it particularly suited to a distributed computing framework like Spark.
Using Spark and Scala, you can analyze and explore your data in an interactive environment with fast feedback. The course shows how to leverage the power of RDDs and DataFrames to manipulate data with ease.
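For a taste of that interactive workflow, here is a minimal sketch of an RDD and a DataFrame side by side in a recent Spark shell (Spark 2.x+), where `sc` and `spark` come pre-created; the file names and fields are hypothetical:

```scala
// RDD: low-level, functional manipulation of raw text.
val lines = sc.textFile("people.txt")        // hypothetical file
val longNames = lines.filter(_.length > 10)  // lazy transformation
println(longNames.count())                   // action: runs the job

// DataFrame: table-like data with a SQL-flavoured API.
val people = spark.read.json("people.json")  // hypothetical file
people.filter(people("age") > 30).show()
```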
- Use Spark for a variety of analytics and Machine Learning tasks
- Understand functional programming constructs in Scala
- Implement complex algorithms like PageRank or music recommendations
- Work with a variety of datasets, from airline delays to Twitter, web graphs, social networks and product ratings
- Use all the different features and libraries of Spark: RDDs, DataFrames, Spark SQL, MLlib, Spark Streaming and GraphX
- Write code in Scala REPL environments and build Scala applications with an IDE
All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you’ll need to have Hadoop installed (either in pseudo-distributed or cluster mode).
The course assumes experience with one of the popular object-oriented programming languages, such as Java or C++.
Who is this course intended for?
- Engineers who want to use a distributed computing engine for batch processing, stream processing or both
- Analysts who want to leverage Spark for analyzing interesting datasets
- Data scientists who want a single engine for analyzing and modelling data, as well as productionizing it
Loonycorn is us, Janani Ravi and Vitthal Srinivasan. Between us, we have studied at Stanford, been admitted to IIM Ahmedabad and have spent years working in tech, in the Bay Area, New York, Singapore and Bangalore.
Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft
Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too
We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Learnsector!
We hope you will try our offerings, and we think you'll like them 🙂
Course Curriculum

- Introduction to Scala
- Installing Scala and Hello World
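As a preview of where the course starts: Hello World is a one-liner in the Scala REPL and only slightly longer as a standalone application:

```scala
// In the Scala REPL, a single expression is enough:
//   println("Hello, World!")

// As a standalone application:
object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello, World!")
  }
}
```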
Introduction to Spark
- What does Donald Rumsfeld have to do with data analysis?
- Why is Spark so cool?
- An introduction to RDDs – Resilient Distributed Datasets
- Built-in libraries for Spark
- The Spark Shell
- See it in Action: Munging Airlines Data with Spark
- Transformations and Actions
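A minimal sketch of the transformation/action split these lessons build on, runnable as-is in the Spark shell (where `sc` already exists):

```scala
val numbers = sc.parallelize(1 to 100)

// Transformations (filter, map) are lazy: they only record lineage.
val evens = numbers.filter(_ % 2 == 0)
val squares = evens.map(n => n * n)

// Actions (reduce, count, collect) actually trigger the computation.
val total = squares.reduce(_ + _)
println(total)
```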
|Resilient Distributed Datasets|
|RDD Characteristics: Partitions and Immutability||00:00:00|
|RDD Characteristics: Lineage, RDDs know where they came from||00:00:00|
|What can you do with RDDs?||00:00:00|
|Create your first RDD from a file||00:00:00|
|Average distance travelled by a flight using map() and reduce() operations||00:00:00|
|Get delayed flights using filter(), cache data using persist()||00:00:00|
|Average flight delay in one-step using aggregate()||00:00:00|
|Frequency histogram of delays using countByValue()||00:00:00|
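Here's a minimal sketch of the kind of code this section builds up to; the airlines.csv path and the position of the delay column are assumptions for illustration, not the course's actual dataset:

```scala
val flights = sc.textFile("airlines.csv").map(_.split(","))

// filter() + persist(): keep delayed flights and cache them for reuse.
val delayed = flights.filter(_(8).toDouble > 0).persist()

// aggregate(): sum and count in one pass to get the average delay.
val (sum, count) = delayed.aggregate((0.0, 0))(
  (acc, f) => (acc._1 + f(8).toDouble, acc._2 + 1),
  (a, b)   => (a._1 + b._1, a._2 + b._2))
println(sum / count)

// countByValue(): frequency histogram of delays, bucketed to 10 minutes.
val histogram = delayed.map(f => (f(8).toDouble / 10).toInt * 10).countByValue()
```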
Advanced RDDs: Pair Resilient Distributed Datasets
- Special Transformations and Actions
- Average delay per airport using reduceByKey(), mapValues() and join()
- Average delay per airport in one step using combineByKey()
- Get the top airports by delay using sortBy()
- Lookup airport descriptions using lookup(), collectAsMap(), broadcast()
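A minimal sketch of these pair-RDD operations on made-up (airport, delay) pairs:

```scala
val delaysByAirport = sc.parallelize(Seq(("SFO", 10.0), ("JFK", 30.0), ("SFO", 20.0)))

// mapValues() + reduceByKey(): average delay per airport.
val sumCounts = delaysByAirport.mapValues(d => (d, 1))
  .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))
val averages = sumCounts.mapValues { case (sum, count) => sum / count }

// sortBy(): top airports by average delay.
val worst = averages.sortBy(_._2, ascending = false).take(3)

// broadcast(): ship a small descriptions table to every node once.
val descriptions = sc.broadcast(Map("SFO" -> "San Francisco", "JFK" -> "New York"))
averages.foreach { case (code, avg) =>
  println(s"${descriptions.value.getOrElse(code, code)}: $avg")
}
```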
Advanced Spark: Accumulators, Spark Submit, MapReduce, Behind the Scenes
- Get information from individual processing nodes using accumulators
- Long-running programs using spark-submit
- Spark-Submit with Scala – A demo
- Behind the scenes: What happens when a Spark script runs?
- Running MapReduce operations
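A minimal sketch of an accumulator inside a standalone application you could run with spark-submit; the object name, jar name and input format are made up for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DelayCounter {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DelayCounter")
    val sc = new SparkContext(conf)

    // Accumulators are write-only on the workers, readable on the driver.
    val badRecords = sc.longAccumulator("badRecords")

    sc.textFile(args(0)).foreach { line =>
      if (line.split(",").length < 9) badRecords.add(1)
    }

    println(s"Malformed lines: ${badRecords.value}")
    sc.stop()
  }
}

// Submitted with something like (names assumed):
//   spark-submit --class DelayCounter delay-counter.jar flights.csv
```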
PageRank: Ranking Search Results
- What is PageRank?
- The PageRank algorithm
- Implement PageRank in Spark
- Join optimization in PageRank using Custom Partitioning
- DataFrames: RDDs + Tables
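A minimal sketch of the classic iterative PageRank computation in Spark, on a three-page toy graph; the HashPartitioner is the kind of custom-partitioning join optimization the lesson above refers to:

```scala
// (page, pages-it-links-to); partitioned once so joins don't reshuffle it.
val links = sc.parallelize(Seq(
  ("a", Seq("b", "c")), ("b", Seq("c")), ("c", Seq("a"))
)).partitionBy(new org.apache.spark.HashPartitioner(4)).persist()

var ranks = links.mapValues(_ => 1.0)

for (_ <- 1 to 10) {
  // Each page sends its rank, split evenly, to the pages it links to.
  val contribs = links.join(ranks).values.flatMap {
    case (neighbors, rank) => neighbors.map(n => (n, rank / neighbors.size))
  }
  // Damping factor 0.85, as in the standard formulation.
  ranks = contribs.reduceByKey(_ + _).mapValues(0.15 + 0.85 * _)
}

ranks.collect().foreach(println)
```

Because the links RDD is hash-partitioned and persisted up front, each iteration's join can reuse its partitioning instead of shuffling the whole graph every time.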
MLlib in Spark: Build a recommendations engine
- Collaborative filtering algorithms
- Latent Factor Analysis with the Alternating Least Squares method
- Music recommendations using the Audioscrobbler dataset
- Implement code in Spark using MLlib
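A minimal sketch of collaborative filtering with MLlib's RDD-based ALS; the ratings.csv path and its user,product,rating layout are assumptions (for implicit-feedback data like Audioscrobbler's listen counts, MLlib also offers ALS.trainImplicit()):

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}

// Hypothetical input: "userId,productId,rating" per line.
val ratings = sc.textFile("ratings.csv").map { line =>
  val Array(user, product, rating) = line.split(",")
  Rating(user.toInt, product.toInt, rating.toDouble)
}

// Train a latent-factor model: rank 10, 10 iterations, lambda 0.01.
val model = ALS.train(ratings, 10, 10, 0.01)

// Recommend 5 products for (hypothetical) user 42.
model.recommendProducts(42, 5).foreach(println)
```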
Spark Streaming
- Introduction to streaming
- Implement stream processing in Spark using DStreams
- Stateful transformations using sliding windows
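A minimal sketch of a sliding-window word count with DStreams; the socket source, window sizes and checkpoint path are arbitrary choices:

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(1))
ssc.checkpoint("checkpoint/")  // required for stateful/window operations

val lines = ssc.socketTextStream("localhost", 9999)

// Count words over the last 30 seconds, recomputed every 10 seconds;
// the inverse function lets Spark subtract data sliding out of the window.
val counts = lines.flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKeyAndWindow(
    (a: Int, b: Int) => a + b,
    (a: Int, b: Int) => a - b,
    Seconds(30), Seconds(10))

counts.print()
ssc.start()
ssc.awaitTermination()
```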
GraphX
- The Marvel social network using Graphs
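And a minimal sketch of GraphX on a tiny made-up co-appearance network (the lesson itself uses the Marvel dataset):

```scala
import org.apache.spark.graphx.{Edge, Graph}

val heroes = sc.parallelize(Seq(
  (1L, "Spider-Man"), (2L, "Iron Man"), (3L, "Hulk")))
val coAppearances = sc.parallelize(Seq(
  Edge(1L, 2L, 1), Edge(2L, 3L, 1)))

val graph = Graph(heroes, coAppearances)

// Degree = number of co-appearance links; a rough measure of centrality.
graph.degrees.join(heroes).collect().foreach {
  case (_, (degree, name)) => println(s"$name: $degree")
}
```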
Scala Language Primer
- Scala – A “better Java”?
- How do Classes work in Scala?
- Classes in Scala – continued
- Functions are different from Methods
- Collections in Scala
- Map, flatMap – The Functional way of looping
- First Class Functions revisited
- Partially Applied Functions
- [For Linux/Mac OS Shell Newbies] Path and other Environment Variables
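A minimal sketch of the functional constructs the primer covers: functions as values, map/flatMap, and partially applied functions:

```scala
// Functions are first-class values in Scala.
val double: Int => Int = _ * 2

// map/flatMap: the functional way of looping over collections.
val nested = List(List(1, 2), List(3, 4))
val doubled = nested.flatMap(_.map(double))   // List(2, 4, 6, 8)

// A partially applied function fixes some arguments, leaving the rest open.
def log(level: String, message: String): String = s"[$level] $message"
val warn = log("WARN", _: String)
println(warn("disk is almost full"))          // [WARN] disk is almost full
```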