Tag: Spark_Interview

PySpark : Dropping duplicate rows in PySpark – A comprehensive guide with examples

PySpark provides several methods to remove duplicate rows from a DataFrame. In this article, we will go over the steps…
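
For reference, a minimal sketch of the two most common approaches, distinct() and dropDuplicates(), run on a small made-up DataFrame:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("drop_duplicates_example").getOrCreate()

# Small sample DataFrame with one fully repeated row (illustrative values)
df = spark.createDataFrame(
    [(1, "Alice"), (2, "Bob"), (1, "Alice")],
    ["id", "name"],
)

# distinct() drops rows that are duplicated across all columns
df.distinct().show()

# dropDuplicates() can also deduplicate on a subset of columns
df.dropDuplicates(["id"]).show()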

PySpark : Replacing null values in a PySpark DataFrame column with 0 or any value you wish.

To replace null values in a PySpark DataFrame column with a numeric value (e.g., 0), you can…
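
A minimal sketch using DataFrame.fillna(), assuming a made-up DataFrame with nulls in an "amount" column:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fillna_example").getOrCreate()

# Sample DataFrame with nulls in the "amount" column (illustrative values)
df = spark.createDataFrame(
    [(1, None), (2, 30), (3, None)],
    ["id", "amount"],
)

# Replace nulls in "amount" with 0; fillna also accepts a single value
# to apply to every compatible column
df_filled = df.fillna({"amount": 0})
df_filled.show()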

PySpark : unix_timestamp function – A comprehensive guide

One of the key functionalities of PySpark is the ability to transform data into the desired format. In some cases,…
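
As a quick illustration, a sketch that converts string timestamps to seconds since the Unix epoch with unix_timestamp(), using made-up sample values:

from pyspark.sql import SparkSession
from pyspark.sql.functions import unix_timestamp, col

spark = SparkSession.builder.appName("unix_timestamp_example").getOrCreate()

# Sample string timestamps (illustrative values)
df = spark.createDataFrame(
    [("2023-01-15 10:30:00",), ("2023-02-20 08:45:00",)],
    ["event_time"],
)

# Convert the string column to epoch seconds, passing the format
# that matches the input strings
df = df.withColumn(
    "event_epoch",
    unix_timestamp(col("event_time"), "yyyy-MM-dd HH:mm:ss"),
)
df.show(truncate=False)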

PySpark : Reading a Parquet file stored on Amazon S3 using PySpark

To read a Parquet file stored on Amazon S3 using PySpark, you can use the following code: from pyspark.sql import…
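
A minimal sketch of the idea, assuming the hadoop-aws package is available to the cluster and that AWS credentials are supplied through the environment or an instance profile; the bucket and prefix below are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read_parquet_from_s3").getOrCreate()

# Hypothetical bucket and prefix, shown only for illustration
df = spark.read.parquet("s3a://my-example-bucket/path/to/data/")

df.printSchema()
df.show(5)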

PySpark : Setting PySpark parameters – A complete walkthrough [3 ways]

In PySpark, you can set various parameters to configure your Spark application. These parameters can be set in different ways…
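
A short sketch of three common ways to set them, with illustrative values only; the article's own walkthrough may use a different breakdown:

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Way 1: a SparkConf object passed to the session builder
conf = SparkConf().set("spark.executor.memory", "4g").set("spark.executor.cores", "2")
spark = SparkSession.builder.appName("config_example").config(conf=conf).getOrCreate()

# Way 2: at runtime, for settings that may change per session
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Way 3: on the command line when submitting the job, for example:
#   spark-submit --conf spark.executor.memory=4g --conf spark.executor.cores=2 my_job.py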

Spark : Calculating executor memory in Spark – A complete guide

The executor memory is the amount of memory allocated to each executor in a Spark cluster. It determines the amount…
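
As a rough worked example under assumed hardware (hypothetical numbers: 16 cores and 64 GB of RAM per worker node, 5 cores per executor), one common sizing heuristic looks like this:

# All figures below are assumptions for illustration, not fixed rules
cores_per_node = 16
memory_per_node_gb = 64
cores_per_executor = 5

# Reserve roughly 1 core and 1 GB per node for the OS and Hadoop daemons
usable_cores = cores_per_node - 1
usable_memory_gb = memory_per_node_gb - 1

executors_per_node = usable_cores // cores_per_executor          # 3
memory_per_executor_gb = usable_memory_gb / executors_per_node   # 21.0

# Subtract off-heap overhead, commonly max(384 MB, 10% of executor memory)
overhead_gb = max(0.384, 0.10 * memory_per_executor_gb)
executor_memory_gb = memory_per_executor_gb - overhead_gb        # about 18.9

print(f"--num-executors (per node): {executors_per_node}")
print(f"--executor-cores: {cores_per_executor}")
print(f"--executor-memory: {executor_memory_gb:.1f}g")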

PySpark : PySpark program to write a DataFrame to a Snowflake table.

Overview of Snowflake and PySpark. Snowflake is a cloud-based data warehousing platform that allows users to store and analyze large…
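
A minimal sketch using the Snowflake Spark connector, assuming the connector and Snowflake JDBC driver jars are on the classpath; every connection value below is a placeholder:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write_to_snowflake").getOrCreate()

# Small sample DataFrame (illustrative values)
df = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])

# Placeholder connection options for the Snowflake connector
sf_options = {
    "sfURL": "your_account.snowflakecomputing.com",
    "sfUser": "YOUR_USER",
    "sfPassword": "YOUR_PASSWORD",
    "sfDatabase": "YOUR_DATABASE",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "YOUR_WAREHOUSE",
}

(
    df.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "TARGET_TABLE")
    .mode("overwrite")
    .save()
)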

Hive : Hive optimizer – A detailed walkthrough

Hive is a popular open-source data warehouse system that allows users to store, manage, and analyze large datasets using SQL-like…

Hive : Difference between the Tez execution engine and the Spark execution engine in Hive

Hive is a data warehousing tool built on top of Hadoop, which allows us to write SQL-like queries on large…