Processing big data in real time is challenging because of the demands it places on scalability, data consistency, and fault tolerance. This Big Data Processing with Apache Spark course shows you how to use Spark to make your overall analysis workflow faster and more efficient. You’ll learn the core concepts and tools of the Spark ecosystem, including the Spark Streaming API, the machine learning extensions (MLlib), and Structured Streaming.
You’ll begin by learning data processing fundamentals using the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. Once you’ve grasped these fundamentals, you’ll move on to the Spark Streaming APIs, consuming data in real time from TCP sockets and integrating Amazon Web Services (AWS) for stream consumption.
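To give a feel for those fundamentals, here is a minimal PySpark sketch, not course material, showing the RDD, DataFrame, and SQL APIs side by side; the sample data and column names are illustrative assumptions:

```python
from pyspark.sql import SparkSession

# Start a local Spark session (entry point for the DataFrame and SQL APIs).
spark = SparkSession.builder.appName("fundamentals-sketch").getOrCreate()

# RDD API: low-level, functional transformations on a distributed collection.
rdd = spark.sparkContext.parallelize([("alice", 3), ("bob", 5), ("alice", 2)])
totals = rdd.reduceByKey(lambda a, b: a + b)  # sum the counts per key
print(totals.collect())

# DataFrame API: the same data with a schema and optimized execution.
df = spark.createDataFrame(rdd, ["user", "count"])
df.groupBy("user").sum("count").show()

# SQL API: register the DataFrame as a view and query it with plain SQL.
df.createOrReplaceTempView("events")
spark.sql("SELECT user, SUM(count) AS total FROM events GROUP BY user").show()
```

And here is a sketch of consuming text from a TCP socket with the Spark Streaming (DStream) API, which is legacy in recent Spark releases; the host, port, and word-count logic are assumptions for illustration (you can feed it locally with `nc -lk 9999`):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "socket-sketch")  # 2 threads: one receiver, one processor
ssc = StreamingContext(sc, batchDuration=1)     # 1-second micro-batches

# Consume lines of text arriving on a TCP socket.
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each batch's word counts to the console

ssc.start()
ssc.awaitTermination()
```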
By the end of this course, you’ll not only understand how to use the machine learning extensions and Structured Streaming, but you’ll also be able to apply Spark in your own upcoming big data projects.
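For comparison, the same socket word count expressed with Structured Streaming might look roughly like this minimal sketch; the host, port, and output mode are again illustrative assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("structured-sketch").getOrCreate()

# Treat the socket as an unbounded table of text lines.
lines = (spark.readStream.format("socket")
              .option("host", "localhost")
              .option("port", 9999)
              .load())

# Split each line into words and keep a running count per word.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Print the full updated result table after every micro-batch.
query = (counts.writeStream.outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()
```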