Spark DataSet - Partition


1 - About

This page talks about partitions in a Spark DataSet. See also: Spark Engine - Partition in Spark.

The number of partitions dictates the number of tasks that are launched.

If the number of partitions is one, the whole computation takes place on a single node.


2 - Number of partitions

2.1 - On read

The number of partitions on read is determined by the spark.sql.files.maxPartitionBytes parameter, which is set to 128 MB by default. This configuration determines the maximum number of bytes to pack into a single partition when reading files.

134217728 bytes = 128 MB

So if you read a 1 GB file with a maxPartitionBytes of 128 MB, Spark creates 1024 MB / 128 MB = 8 partitions.

128 MB is also the default HDFS block size, and file sizes should be larger than it (aim for around 1 GB per file).
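
A minimal sketch (the input path and app name are hypothetical) of setting the parameter and inspecting the resulting partitioning:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("read-partitions")   // hypothetical app name
  .getOrCreate()

// Pack at most 128 MB into a single partition when reading files (the default value).
spark.conf.set("spark.sql.files.maxPartitionBytes", 134217728L)

// For a 1 GB input file, expect roughly 1024 MB / 128 MB = 8 partitions.
val df = spark.read.parquet("/data/events.parquet")
println(df.rdd.getNumPartitions)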

2.2 - In Memory

The number of partitions can be changed afterwards with the following methods (see the sketch after this list):

  • repartition
    • creates equally distributed chunks thanks to a hash partitioner
    • performs a shuffle of the data to create the partitions
  • coalesce
    • can only reduce the number of partitions - it moves data partition-wise to other existing partitions. Unlike repartition, coalesce doesn’t perform a shuffle to create the partitions; the data from a partition is removed and appended to another partition.
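
A minimal sketch of both methods (reusing the df Dataset from the read example above):

// repartition: full shuffle into the requested number of hash-distributed partitions.
val byHash = df.repartition(100)
println(byHash.rdd.getNumPartitions)   // 100

// coalesce: merges existing partitions without a shuffle; can only lower the count.
val reduced = byHash.coalesce(10)
println(reduced.rdd.getNumPartitions)  // 10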

3 - Management

3.1 - Write

The DataFrameWriter partitionBy method partitions the output by the given columns on the file system.
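
A hedged sketch (the column names and output path are hypothetical):

// Creates one directory per distinct (year, month) value on the file system,
// e.g. .../year=2023/month=1/part-....parquet
df.write
  .partitionBy("year", "month")
  .parquet("/data/events_by_month")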


3.2 - Apply function

  • foreachPartition(func) - Runs func on each partition of this Dataset.
  • mapPartitions(func) - Returns a new Dataset that contains the result of applying func to each partition (Experimental).
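
A minimal sketch of both methods on a small typed Dataset (spark is an existing SparkSession; the values are illustrative):

import spark.implicits._

val nums = Seq(1, 2, 3, 4, 5, 6).toDS().repartition(3)

// foreachPartition: run a side-effecting function once per partition
// (commonly used to open one connection per partition).
nums.foreachPartition { (it: Iterator[Int]) =>
  println(s"partition has ${it.size} rows")
}

// mapPartitions: transform each partition's iterator into a new iterator.
val partitionSums = nums.mapPartitions(it => Iterator(it.sum))
partitionSums.show()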

3.3 - Sort

  • sortWithinPartitions - Returns a new Dataset with each partition sorted by the given expressions. This is the same operation as “SORT BY” in SQL (Hive QL).
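
A short sketch (the timestamp column and the events table are hypothetical):

// Sorts rows inside each partition; no shuffle and no global ordering.
val sorted = df.sortWithinPartitions("timestamp")
// SQL equivalent: SELECT * FROM events SORT BY timestamp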

3.4 - Coalesce

  • coalesce - reduces the number of partitions. If you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead each of the 100 new partitions will claim 10 of the current partitions.

Example:

// Coalesce the data to a single partition so the output is a single file
// ("/tmp/output" is a placeholder path).
data.coalesce(1).write.csv("/tmp/output")

3.5 - Repartition

Repartition creates partitions based on the user’s input by performing a full shuffle of the data (after the read).

If the number of partitions is not specified, the number is taken from spark.sql.shuffle.partitions (200 by default).

It returns a new Dataset partitioned by the given partitioning expressions.

This is the same operation as DISTRIBUTE BY in SQL.

All rows with the same Distribute By columns will go to the same reducer. However, Distribute By does not guarantee clustering or sorting properties on the distributed keys.
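
A hedged sketch (the country column and the events table are hypothetical):

import org.apache.spark.sql.functions.col

// Hash-partition by an expression; the partition count falls back to
// spark.sql.shuffle.partitions (200 by default) when not specified.
val byCountry = df.repartition(col("country"))

// Explicit partition count plus expression:
val byCountry50 = df.repartition(50, col("country"))

// SQL equivalent:
spark.sql("SELECT * FROM events DISTRIBUTE BY country")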


4 - Support

4.1 - Number of dynamic partitions created is XXXX, which is more than 1000

Number of dynamic partitions created is 2100, which is more than 1000. 
To solve this try to set hive.exec.max.dynamic.partitions to at least 2100.;

The configuration below must be set before starting the Spark application:

spark.hadoop.hive.exec.max.dynamic.partitions

A SET statement through the Spark SQL (Thrift) server will not work; the configuration has to be in place when the server starts.
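
A minimal sketch of setting the property at application start (the app name is hypothetical); the same property can also be passed on the command line with spark-submit --conf:

import org.apache.spark.sql.SparkSession

// The Hadoop/Hive property must be in place before the session (and its Hive client) is created.
val spark = SparkSession.builder()
  .appName("dynamic-partitions-app")
  .config("spark.hadoop.hive.exec.max.dynamic.partitions", "2100")
  .enableHiveSupport()
  .getOrCreate()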

Source: https://github.com/apache/spark/pull/18769
