Spark SQL and PySpark 3 using Python 3 (Formerly CCA175) | Udemy


English | Size: 9.35 GB
Genre: eLearning

What you’ll learn
All the HDFS commands relevant to validating files and folders in HDFS.
A quick recap of the Python concepts relevant to learning Spark.
Ability to use Spark SQL to solve problems using SQL-style syntax.
PySpark Data Frame APIs to solve problems using Data Frame-style APIs.
How the Spark Metastore is used to register Data Frames as temporary views, so that data in Data Frames can be processed using Spark SQL.
Apache Spark Application Development Life Cycle
Apache Spark Application Execution Life Cycle and Spark UI
Setup SSH Proxy to access Spark Application logs
Deployment Modes of Spark Applications (Cluster and Client)
Passing Application Properties Files and External Dependencies while running Spark Applications

As part of this course, you will learn all the key skills to build Data Engineering pipelines using Spark SQL and Spark Data Frame APIs, with Python as the programming language. This course used to be a CCA 175 Spark and Hadoop Developer course for preparation for the certification exam. As of 10/31/2021, the exam has been sunset, and we have renamed the course to Apache Spark 2 and 3 using Python 3, as it covers industry-relevant topics beyond the scope of the certification.

About Data Engineering

Data Engineering is the practice of processing data according to downstream needs. As part of Data Engineering, we build different pipelines, such as batch pipelines and streaming pipelines. All roles related to data processing are consolidated under Data Engineering; conventionally, they were known as ETL Development, Data Warehouse Development, etc. Apache Spark has evolved into a leading technology for Data Engineering at scale.

I have prepared this course for anyone who would like to transition into a Data Engineer role using PySpark (Python + Spark). I am a Data Engineering Solution Architect with proven experience in designing solutions using Apache Spark.

Let us go through the details of what you will be learning in this course. Keep in mind that the course includes many hands-on tasks that will give you ample practice using the right tools, along with plenty of exercises to evaluate yourself.

Setup of Single Node Big Data Cluster

Many of you would like to transition to Big Data from conventional technologies such as Mainframes, Oracle PL/SQL, etc., and you might not have access to Big Data clusters. It is very important for you to set up the environment in the right manner. Don't worry if you do not have a cluster handy; we will guide you through it with support via Udemy Q&A.

Setup Ubuntu based AWS Cloud9 Instance with the right configuration

Ensure Docker is set up

Setup Jupyter Lab and other key components

Setup and Validate Hadoop, Hive, YARN, and Spark
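Once the components are installed, the validation step above can be sketched with a few quick smoke-test commands. This is a minimal, hypothetical checklist assuming a standard single-node Hadoop/Hive/YARN/Spark installation with the binaries on your PATH:

```shell
# Hypothetical validation commands for a single-node setup; adjust to your install.
hdfs dfs -ls /                 # HDFS responds and the root directory is listable
yarn node -list                # YARN NodeManagers are registered with the ResourceManager
hive -e 'SHOW DATABASES;'      # the Hive metastore is reachable
spark-submit --version         # Spark is installed and on the PATH
```

If any of these commands hangs or errors out, fix that service before moving on; the rest of the course assumes all four are healthy.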

A quick recap of Python

This course requires a decent knowledge of Python. To make sure you understand Spark from a Data Engineering perspective, we have added a module to quickly warm up with Python. If you are not familiar with Python, we suggest you go through our other course, Data Engineering Essentials – Python, SQL, and Spark.

Data Engineering using Spark SQL

Let us deep-dive into Spark SQL to understand how it can be used to build Data Engineering pipelines. Spark SQL provides the ability to leverage the distributed computing capabilities of Spark, coupled with easy-to-use, developer-friendly SQL-style syntax.

Getting Started with Spark SQL

Basic Transformations using Spark SQL

Managing Spark Metastore Tables – Basic DDL and DML

Managing Spark Metastore Tables – DML and Partitioning

Overview of Spark SQL Functions

Windowing Functions using Spark SQL

Data Engineering using Spark Data Frame APIs

Spark Data Frame APIs are an alternative way of building Data Engineering applications at scale, leveraging the distributed computing capabilities of Spark. Data Engineers from application development backgrounds might prefer Data Frame APIs over Spark SQL for building Data Engineering applications.

Data Processing Overview using PySpark Data Frame APIs

Processing Column Data using PySpark Data Frame APIs

Basic Transformations using PySpark Data Frame APIs – Filtering, Aggregations, and Sorting

Joining Data Sets using PySpark Data Frame APIs

Windowing Functions using PySpark Data Frame APIs – Aggregations, Ranking, and Analytic Functions

Spark Metastore Databases and Tables

Apache Spark Application Development and Deployment Life Cycle

As Apache Spark-based Data Engineers, we should be familiar with the application development and deployment life cycle. As part of this section, you will learn that complete life cycle, including but not limited to productionizing code, externalizing properties, and reviewing the details of Spark jobs.

Apache Spark Application Development Lifecycle using Python as Programming Language

Spark Application Execution Life Cycle and Spark UI

Setup SSH Proxy to access Spark Application logs

Deployment Modes of Spark Applications

Passing Application Properties Files and External Dependencies
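The deployment topics above can be sketched with a pair of hypothetical `spark-submit` invocations; the file names and paths are placeholders, not artifacts from the course:

```shell
# Client mode: the driver runs on the machine that launches the job,
# which is convenient for interactive development and debugging.
spark-submit \
  --master yarn \
  --deploy-mode client \
  app.py

# Cluster mode: the driver runs inside the cluster, managed by YARN,
# which is the typical choice for production jobs.
# --properties-file externalizes configuration; --py-files ships
# external Python dependencies alongside the application.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --properties-file app.properties \
  --py-files dependencies.zip \
  app.py
```

The course walks through when each mode is appropriate and how the driver placement affects log access (hence the SSH proxy setup above).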

All the demos are given on our state-of-the-art Big Data cluster. You can avail one month of complimentary lab access by reaching out to [email protected] with your Udemy receipt.

Who this course is for:
Any IT aspirant/professional willing to learn Data Engineering using Apache Spark
Python developers who want to learn Spark as a key skill for becoming a Data Engineer

nitro.download/view/F0719A8C33BA75A/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part01.rar
nitro.download/view/25F00F3BD2644FF/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part02.rar
nitro.download/view/861BA82D191A69A/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part03.rar
nitro.download/view/0C4C8A3F0BAAD39/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part04.rar
nitro.download/view/7110D641F2739AF/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part05.rar
nitro.download/view/B86738B91CAA452/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part06.rar
nitro.download/view/A9176FF80B6CEEB/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part07.rar
nitro.download/view/810EABF1E7C6E0E/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part08.rar
nitro.download/view/3D564DD3F4790A9/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part09.rar
nitro.download/view/8C8138A2FDE82AE/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part10.rar

rapidgator.net/file/2acd493f7207217327949f5216f23a89/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part01.rar.html
rapidgator.net/file/8cb928f43fbde7f8aaa20eb9ed062d98/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part02.rar.html
rapidgator.net/file/5b9939db64a7558ebe9299a366784dae/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part03.rar.html
rapidgator.net/file/790ea141c9624cc612a5fbfba56ba5ac/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part04.rar.html
rapidgator.net/file/a23a0f045e1b6babf17e3203ad943d0e/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part05.rar.html
rapidgator.net/file/3184cdeb979317233682a9da2376aac7/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part06.rar.html
rapidgator.net/file/487d7b1b8dabeae96c14ccb1be206a0d/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part07.rar.html
rapidgator.net/file/8e003396c408e78b2fa097a97b138ebe/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part08.rar.html
rapidgator.net/file/6e9ac47d9a395a8994d5ea36a70d7a96/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part09.rar.html
rapidgator.net/file/ecefde99791a4b7b6f9e07f2cd19965e/Apache-Spark-2-and-3-using-Python-3-Formerly-CCA-175.18.2.1.part10.rar.html

If any links die or you have problems unraring, send a request to
forms.gle/e557HbjJ5vatekDV9
