HiveContext is a Spark SQL entry point that allows you to work with Hive data in…
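A minimal sketch of the Spark 1.x/2.x usage (the database and table names are placeholders; HiveContext was removed in Spark 3.0):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="hive-context-example")
hive_ctx = HiveContext(sc)  # entry point for Hive tables in Spark 1.x/2.x

# Query a Hive table with plain SQL (names are illustrative)
df = hive_ctx.sql("SELECT * FROM sample_db.sample_table LIMIT 10")
df.show()

On Spark 3.x the equivalent is SparkSession.builder.enableHiveSupport().getOrCreate().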
Learn about PySpark's broadcast variable with an example
In PySpark, the broadcast variable is used to cache a read-only variable on all the worker nodes, which can be…
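A minimal sketch of a broadcast variable (the lookup dictionary is illustrative):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("broadcast-example").getOrCreate()

# Cache a small read-only lookup table once on every worker node
country_names = spark.sparkContext.broadcast({"IN": "India", "US": "United States"})

codes = spark.sparkContext.parallelize(["IN", "US", "IN"])
# Tasks read the cached copy via .value instead of shipping the dict with every closure
print(codes.map(lambda c: country_names.value.get(c, "Unknown")).collect())
# ['India', 'United States', 'India']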
PySpark : Removing all occurrences of a specified element from an array column in a DataFrame
pyspark.sql.functions.array_remove(col, element) is a function that removes all occurrences of a specified element from an array column…
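A self-contained sketch of array_remove (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_remove

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([([1, 2, 3, 2, 2],)], ["numbers"])

# Drop every occurrence of 2 from the array
df.select(array_remove("numbers", 2).alias("without_twos")).show()
# without_twos = [1, 3]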
PySpark : Finding the position of a given value in an array column.
pyspark.sql.functions.array_position The array_position function is used to find the position of a given value in an array column. This is…
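A quick sketch of array_position (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_position

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(["a", "b", "c"],)], ["letters"])

# Positions are 1-based; a value that is absent yields 0
df.select(array_position("letters", "b").alias("pos")).show()
# pos = 2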
PySpark : Find the minimum value in an array column of a DataFrame
pyspark.sql.functions.array_min The array_min function is a built-in function in PySpark that finds the minimum value in an array column of…
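A quick sketch of array_min (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_min

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([([3, 1, 4, 1, 5],)], ["nums"])

# Smallest element of each array
df.select(array_min("nums").alias("smallest")).show()
# smallest = 1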
PySpark : Find the maximum value in an array column of a DataFrame
pyspark.sql.functions.array_max The array_max function is a built-in function in PySpark that finds the maximum value in an array column of…
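A quick sketch of array_max (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_max

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([([3, 1, 4, 1, 5],)], ["nums"])

# Largest element of each array
df.select(array_max("nums").alias("largest")).show()
# largest = 5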
PySpark : Concatenating elements of an array into a single string.
pyspark.sql.functions.array_join PySpark’s array_join function is used to concatenate elements of an array into a single string, with the elements separated…
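A quick sketch of array_join (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_join

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(["spark", "sql", "rocks"],)], ["words"])

# Concatenate the array elements with a delimiter
df.select(array_join("words", "-").alias("joined")).show(truncate=False)
# joined = spark-sql-rocks

An optional third argument, null_replacement, substitutes a string for null elements.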
Connecting to Snowflake from PySpark – Example included
Connecting to Snowflake from PySpark involves several steps: Install the Snowflake connector for Python by running “pip install snowflake-connector-python” in…
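The full article walks through the setup; as a rough sketch of the read path itself through the spark-snowflake connector (every connection value below is a placeholder, and the connector JARs must already be on the classpath):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-example").getOrCreate()

# All values below are placeholders for your own account details
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "my_user",
    "sfPassword": "my_password",
    "sfDatabase": "MY_DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "MY_WH",
}

df = (
    spark.read.format("net.snowflake.spark.snowflake")  # spark-snowflake data source
    .options(**sf_options)
    .option("dbtable", "MY_TABLE")  # placeholder table name
    .load()
)
df.show()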
PySpark : Getting the approximate number of unique elements in a column of a DataFrame
pyspark.sql.functions.approx_count_distinct PySpark's approx_count_distinct function approximates the number of unique elements in a column of a DataFrame…
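A quick sketch of approx_count_distinct (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import approx_count_distinct

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,), (2,), (3,), (3,)], ["id"])

# HyperLogLog++-based estimate; rsd bounds the maximum relative standard deviation
df.select(approx_count_distinct("id", rsd=0.05).alias("approx_unique")).show()
# approx_unique = 3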
Utilize the power of the Pandas library with PySpark DataFrames.
pyspark.sql.functions.pandas_udf PySpark's PandasUDFType designates a kind of user-defined function (UDF) that allows you to use the power of the Pandas library…
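A minimal sketch of a scalar pandas UDF (assumes Spark 3.x with pyarrow installed):

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import LongType

spark = SparkSession.builder.getOrCreate()

@pandas_udf(LongType())
def plus_one(batch: pd.Series) -> pd.Series:
    # Runs on whole pandas Series batches instead of one row at a time
    return batch + 1

spark.range(3).select(plus_one("id").alias("id_plus_one")).show()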
PySpark : How to format the number X to a format like ‘#,###,###.##’, rounded to d decimal places
pyspark.sql.functions.format_number The format_number function is used to format a number as a string. The function takes two arguments: the number…
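A quick sketch of format_number (the sample data is made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import format_number

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1234567.891,)], ["amount"])

# Round to 2 decimal places and insert thousands separators
df.select(format_number("amount", 2).alias("formatted")).show()
# formatted = 1,234,567.89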