Category: spark


AWS Glue

What are the Python libraries provided by AWS Glue version 2.0?

The default Python libraries available in AWS Glue version 2.0 are as follows: boto3==1.12.4 botocore==1.15.4 certifi==2019.11.28 chardet==3.0.4 cycler==0.10.0 Cython==0.29.15 docutils==0.15.2…

PySpark

AWS Glue: an example of how to read a sample CSV file with PySpark

Here, assume that you have your CSV data in an AWS S3 bucket. The next step is to crawl the data…

PySpark

How to rename a Spark DataFrame with a complex schema using AWS Glue – PySpark

There can be multiple reasons to rename columns in a Spark DataFrame. Even though withColumnRenamed can be used to rename…

PySpark

PySpark: how to get rows with nulls in a column, rows without nulls, or a count of non-null values

pyspark.sql.Column.isNotNull – isNotNull(): True if the current expression is NOT null. isNull(): True if the current expression is null. With…

PySpark

PySpark – groupby with aggregation (count, sum, mean, min, max)

pyspark.sql.DataFrame.groupBy – the PySpark groupBy function groups the DataFrame using the specified columns so that aggregations (count, sum, mean, min, max) can be run on them…

PySpark

PySpark filter: how to filter data in PySpark – multiple options explained

pyspark.sql.DataFrame.filter – the PySpark filter function is used to filter the data in a Spark DataFrame; in short, it is used for cleansing…

PySpark

How to concatenate multiple columns in a Spark DataFrame

concat_ws: with the concat_ws() function you can concatenate multiple input string columns into a single string column, using…

PySpark

PySpark – how to create an RDD from a list and from AWS S3

In this article you will learn what an RDD is, and how we can create an RDD from a…

PySpark

How to query a DataFrame with Spark SQL – PySpark

If you are in a situation where you can easily get the result using SQL, or the SQL already exists, then you…