LinkedIn Learning – Using Apache Spark with .NET

Linkedin Learning – Using Apache Spark with DotNET-XQZT
English | Size: 458.76 MB
Category: Tutorial


This course offers a solid introduction to .NET for Apache Spark and how it brings the world of big data analytics to the .NET ecosystem.

Mograph – C4D to Spark AR Crash-Course

Mograph – C4D to Spark AR Crash-Course
English | Size: 366.1MB
Category: Tutorial


What will I learn?
Don will show you how to take a basic, animated model in Cinema 4D, export it in the correct format, import it into Spark AR, make sure it’s animating and placed correctly, apply a texture, add an environment, and upload it for approval. This course is not about the complicated inner workings of Spark AR, but rather the workflow between programs. It will help kick-start your understanding of the process so you can begin your AR journey. Subsequent mini-courses will cover various other topics in the world of Spark AR.

PluralSight – Handling Streaming Data with Azure Databricks Using Spark Structured Streaming

PluralSight – Handling Streaming Data with Azure Databricks Using Spark Structured Streaming Bookware-KNiSO
English | Size: 235.66 MB
Category: Tutorial


In this course, you will deep-dive into Spark Structured Streaming, see its features in action, and use it to build end-to-end, complex, and reliable streaming pipelines with PySpark, using the Azure Databricks platform to build and run them.
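To give a flavor of what such a pipeline looks like, here is a minimal PySpark Structured Streaming sketch. It is not taken from the course: it uses the built-in "rate" source as a stand-in for a real event stream (on Azure Databricks you would typically read from Kafka or Event Hubs), and the checkpoint path is a hypothetical placeholder.

```python
# Minimal Structured Streaming sketch (assumptions: local Spark session,
# "rate" source standing in for a real event stream, placeholder checkpoint path).
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, count

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# The rate source emits rows with a "timestamp" and an incrementing "value".
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Count events per 30-second window -- a stand-in for a real aggregation.
counts = events.groupBy(window("timestamp", "30 seconds")).agg(count("*").alias("events"))

# Write the running counts to the console; the checkpoint keeps the query recoverable.
query = (counts.writeStream
         .outputMode("complete")
         .option("checkpointLocation", "/tmp/streaming-sketch-checkpoint")
         .format("console")
         .start())

query.awaitTermination()
```

On Databricks the same query would usually swap the console sink for Delta Lake or another durable sink; the structure of read, transform, and write stays the same.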

PluralSight – Apache Spark Fundamentals

PluralSight – Apache Spark Fundamentals-REBAR
English | Size: 556.04 MB
Category: Tutorial

Our ever-connected world is creating data faster than Moore’s law can keep up, forcing us to be smarter about how we analyze it. Previously, we had Hadoop’s MapReduce framework for batch processing, but modern big data processing demands have outgrown this framework. That’s where Apache Spark steps in, boasting speeds 10-100x faster than Hadoop and setting the world record in large-scale sorting. Spark’s general abstraction means it can expand beyond simple batch processing, making it capable of such things as blazing-fast iterative algorithms and exactly-once streaming semantics. In this course, you’ll learn Spark from the ground up, starting with its history before creating a Wikipedia analysis application as one of the means for learning a wide scope of its core API. That core knowledge will make it easier to look into Spark’s other libraries, such as the streaming and SQL APIs. Finally, you’ll learn how to avoid a few commonly encountered rough edges of Spark. You will leave this course with a tool belt capable of creating your own performance-maximized Spark application.
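As a rough illustration of the kind of analysis the course describes, here is a small PySpark sketch combining the core DataFrame API with the SQL API. The "wikipedia.json" path and its "title"/"text" fields are hypothetical placeholders, not the course’s actual dataset, and the course itself may work in Scala rather than Python.

```python
# Sketch of a Wikipedia-style analysis (assumptions: a JSON dump at
# "wikipedia.json" with "title" and "text" fields -- both hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, lower, col

spark = SparkSession.builder.appName("wikipedia-sketch").getOrCreate()

# Load articles as a DataFrame.
articles = spark.read.json("wikipedia.json")

# Core API: tokenize article text and count word frequencies.
word_counts = (articles
               .select(explode(split(lower(col("text")), "\\s+")).alias("word"))
               .groupBy("word")
               .count()
               .orderBy(col("count").desc()))

# SQL API: the same data queried through a temporary view.
articles.createOrReplaceTempView("articles")
longest = spark.sql(
    "SELECT title, length(text) AS chars FROM articles ORDER BY chars DESC LIMIT 10")

word_counts.show(20)
longest.show()
```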