PySpark : Understanding PySpark’s LAG and LEAD Window Functions with detailed examples


One of PySpark's powerful features is its support for window functions, which enable complex calculations and data manipulation across related rows. In this article, we focus on two common window functions in PySpark: LAG and LEAD. We will discuss their functionality and syntax, and walk through a detailed example with input data to illustrate their usage.

  1. LAG and LEAD Window Functions in PySpark

LAG and LEAD are window functions used to access the previous (LAG) or the next (LEAD) row in a result set, allowing you to perform calculations or comparisons across rows. These functions can be especially useful for time series analysis or when working with ordered data.

Syntax:

LAG(column, offset=1, default=None)
LEAD(column, offset=1, default=None)

column: The column or expression to apply the LAG or LEAD function to.
offset: The number of rows to look behind (LAG) or ahead (LEAD) from the current row (default is 1).
default: The value to return when no previous or next row exists; if not specified, NULL is returned.
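
For instance, a minimal sketch (using a small hypothetical sales DataFrame) of a non-default offset and default value might look like this, passing offset 2 and default 0 positionally:

from pyspark.sql import SparkSession
from pyspark.sql.functions import lag
from pyspark.sql.window import Window

spark = SparkSession.builder.master("local").appName("LAG offset example").getOrCreate()

# Small hypothetical sales DataFrame
df = spark.createDataFrame(
    [("2023-01-01", 100), ("2023-02-01", 200), ("2023-03-01", 300)],
    ["Date", "Sales"],
)

window_spec = Window.orderBy("Date")

# Look two rows back (offset=2); rows with no row two positions behind get 0 (default) instead of NULL
df = df.withColumn("Sales Two Months Ago", lag("Sales", 2, 0).over(window_spec))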
  2. A Detailed Example of Using LAG and LEAD Functions

Let’s create a PySpark DataFrame with sales data and apply LAG and LEAD functions to calculate the previous and next month’s sales, respectively.

First, let’s import the necessary libraries and create a sample DataFrame:

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date
from pyspark.sql.window import Window
# Create a Spark session
spark = SparkSession.builder.master("local").appName("LAG and LEAD Functions Example").getOrCreate()
# Sample data
data = [("2023-01-01", 100), ("2023-02-01", 200), ("2023-03-01", 300), ("2023-04-01", 400)]
# Define the schema
schema = ["Date", "Sales"]
# Create the DataFrame
df = spark.createDataFrame(data, schema)
# Convert the date string to date type
df = df.withColumn("Date", to_date(df["Date"], "yyyy-MM-dd"))

Now that we have our DataFrame, let’s apply the LAG and LEAD functions using a Window specification:

from pyspark.sql.functions import lag, lead

# Define the window specification
window_spec = Window.orderBy("Date")

# Apply the LAG and LEAD functions
df = df.withColumn("Previous Month Sales", lag(df["Sales"]).over(window_spec))
df = df.withColumn("Next Month Sales", lead(df["Sales"]).over(window_spec))

# Show the results
df.show()

This produces the following output:

+----------+-----+--------------------+----------------+
|      Date|Sales|Previous Month Sales|Next Month Sales|
+----------+-----+--------------------+----------------+
|2023-01-01|  100|                null|             200|
|2023-02-01|  200|                 100|             300|
|2023-03-01|  300|                 200|             400|
|2023-04-01|  400|                 300|            null|
+----------+-----+--------------------+----------------+

In this example, we used the LAG function to obtain the previous month's sales and the LEAD function to obtain the next month's sales. Because no default value was specified, the first row has no previous value and the last row has no next value, so both are null.
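
As a natural next step, these lagged values can feed further calculations. The sketch below (the column name "MoM Change" is just an illustrative choice) subtracts the previous month's sales from the current month's, reusing the window_spec defined above:

from pyspark.sql.functions import col, lag

# Month-over-month change: current sales minus the previous month's sales
df = df.withColumn("MoM Change", col("Sales") - lag(col("Sales"), 1).over(window_spec))
df.show()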
