Using Spark With SQream
If you are using Spark for distributed processing and analysis and wish to use it with SQream, follow these instructions.
Installation and Configuration
Before You Begin
To use Spark with SQream, you must have the following installed:
SQream version 2022.1.8 or later
Spark version 3.3.1 or later
SQream Spark Connector 1.0.0
SQream JDBC driver version 4.5.6 or later
JDBC
If JDBC is not yet configured, follow the JDBC Client Drivers page for guidance on registering and configuring the driver.
Connecting Spark to SQream
The SQream-Spark Connector enables inserting DataFrames into SQream tables and exporting tables or queries as DataFrames for use with Spark. DataFrames are Spark objects used for transferring data from one data source to another.
In the Spark Shell, run:
./spark-shell --driver-class-path {driver path} --jars {Spark-Sqream-Connector.jar path}
Example:
./spark-shell --driver-class-path /home/sqream/sqream-jdbc-4.5.6.jar --jars Spark-Sqream-Connector-1.0.jar
Connector Configuration
The Spark JDBC connection properties allow users to configure connections between Spark and databases. These properties enable database access, query execution, and result retrieval, as well as authentication, encryption, and connection pooling.
The following Spark connection properties are supported by SQream:
Item | Default | Description
---|---|---
url | | The JDBC URL to connect to
dbtable | | A JDBC table to read from or write to. When reading, anything that is valid in a FROM clause of a SQL query can be used, for example a subquery in parentheses. It is not allowed to specify dbtable and query at the same time
query | | A query (SELECT statement) used to read data into Spark. It is not allowed to specify dbtable and query at the same time
driver | | The class name of the JDBC driver to use to connect to this URL
numPartitions | | The maximum number of partitions that can be used for parallelism in table reading and writing. This also determines the maximum number of concurrent JDBC connections. If the number of partitions to write exceeds this limit, Spark decreases it to this limit by calling coalesce(numPartitions) before writing
queryTimeout | 0 | The number of seconds the driver waits for a Statement object to execute. Zero means there is no limit. In the write path, this option depends on how JDBC drivers implement the API setQueryTimeout
fetchsize | 1 | The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to a low fetch size (e.g. Oracle with 10 rows)
batchsize | 1000000 | The JDBC batch size, which determines how many rows to insert per round trip. This can help performance on JDBC drivers. This option applies only to writing
sessionInitStatement | | After each database session is opened to the remote database and before starting to read data, this option executes a custom SQL statement (or a PL/SQL block). Use this to implement session initialization code
truncate | false | A JDBC writer related option. When SaveMode.Overwrite is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient and prevents the table metadata (e.g. indices) from being removed
cascadeTruncate | The default cascading truncate behaviour of the JDBC database in question, specified in the isCascadeTruncate in each JDBCDialect | A JDBC writer related option. If enabled and supported by the JDBC database (PostgreSQL and Oracle at the moment), this option allows execution of a TRUNCATE TABLE t CASCADE
createTableOptions | | A JDBC writer related option. If specified, this option allows setting of database-specific table and partition options when creating a table (e.g. CREATE TABLE t (name string) ENGINE=InnoDB)
createTableColumnTypes | | The database column data types to use instead of the defaults when creating the table. Data type information should be specified in the same format as CREATE TABLE columns syntax (e.g. "name CHAR(64), comments VARCHAR(1024)"). The specified types should be valid Spark SQL data types
customSchema | | The custom schema to use for reading data from JDBC connectors. For example, "id DECIMAL(38, 0), name STRING"
pushDownPredicate | true | The option to enable or disable predicate push-down into the JDBC data source. The default value is true, in which case Spark pushes down filters to the JDBC data source as much as possible. Otherwise, if set to false, no filter is pushed down to the JDBC data source and all filters are handled by Spark. Predicate push-down is usually turned off when the predicate filtering is performed faster by Spark than by the JDBC data source
pushDownAggregate | false | The option to enable or disable aggregate push-down in the V2 JDBC data source. The default value is false, in which case Spark does not push down aggregates to the JDBC data source. Otherwise, if set to true, aggregates are pushed down to the JDBC data source. Aggregate push-down is usually turned off when the aggregate is performed faster by Spark than by the JDBC data source. Note that aggregates can be pushed down if and only if all the aggregate functions and the related filters can be pushed down
pushDownLimit | false | The option to enable or disable LIMIT push-down into the V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT, a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if set to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source
pushDownTableSample | false | The option to enable or disable TABLESAMPLE push-down into the V2 JDBC data source. The default value is false, in which case Spark does not push down TABLESAMPLE to the JDBC data source. Otherwise, if set to true, TABLESAMPLE is pushed down to the JDBC data source
connectionProvider | | The name of the JDBC connection provider to use to connect to this URL, e.g. db2, mssql
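The option values are usually collected into a single map and passed to the connector with the options() method. The following is a minimal Scala sketch, assuming placeholder connection details; the URL follows the format shown in the Java example at the end of this page, and the driver class name is the one commonly used for the SQream JDBC driver (verify it against the JDBC Client Drivers page). The resulting map is what sfOptions refers to in the examples below.
val sfOptions: Map[String, String] = Map(
  "url"           -> "jdbc:Sqream://192.168.4.51:5000/master;user=sqream;password=sqream;cluster=false", // placeholder host, port, database, credentials
  "driver"        -> "com.sqream.jdbc.SQDriver",  // assumed SQream JDBC driver class name
  "dbtable"       -> "test",                      // table to read from or write to
  "numPartitions" -> "4",                         // cap on concurrent JDBC connections
  "fetchsize"     -> "10000"                      // rows fetched per round trip
)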
Transferring Data From SQream to Spark
In the Spark shell, configure Spark to read from the SQream database.
From the SqlContext object, use the read() method to construct a DataFrameReader.
Use the format() method to specify SQREAM_SOURCE_NAME.
Use either the option() or options() method to specify the connector options.
Specify one of the following options for reading tables:
dbtable: The name of the table to be read. All columns and records are retrieved (i.e. it is equivalent to SELECT * FROM db_table).
query: The exact query (SELECT statement) to run.
Examples
To read an entire table:
val df: DataFrame = sqlContext.read
  .format(SQREAM_SOURCE_NAME)
  .options(sfOptions)
  .option("dbtable", "<table_name>")
  .load()
To read query results:
val df: DataFrame = sqlContext.read
  .format(SQREAM_SOURCE_NAME)
  .options(sfOptions)
  .option("query", "<query_to_execute>")
  .load()
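Once loaded, the DataFrame behaves like any other Spark DataFrame. The following short sketch uses only standard Spark APIs (nothing SQream-specific) and assumes spark is the active SparkSession, as it is in the Spark shell:
df.printSchema()                        // inspect column names and Spark types
df.createOrReplaceTempView("sqream_df") // register for Spark SQL
spark.sql("SELECT * FROM sqream_df LIMIT 10").show()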
Transferring Data From Spark to SQream
In the Spark shell, configure Spark to write to the SQream database.
Use the write() method of the DataFrame to construct a DataFrameWriter.
Specify SQREAM_SOURCE_NAME using the format() method.
Specify the connector options using either the option() or options() method.
Use the dbtable option to specify the table to which data is written.
Use the mode() method to specify the save mode for the content.
Examples
To write a DataFrame to a table:
df.write
  .format(SQREAM_SOURCE_NAME)
  .options(sfOptions)
  .option("dbtable", "<table_name>")
  .mode(SaveMode.Overwrite)
  .save()
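If the target table already contains rows that should be kept, a different save mode can be used. The sketch below relies on Spark's standard SaveMode.Append; how each save mode behaves against an existing SQream table should be verified with the connector version in use.
df.write
  .format(SQREAM_SOURCE_NAME)
  .options(sfOptions)
  .option("dbtable", "<table_name>")
  .mode(SaveMode.Append)  // add rows instead of replacing the table
  .save()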
Supported Data Types and Mapping
SQream data types mapped to Spark
SQream | Spark
---|---
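When the default mapping is not what a particular read requires, the customSchema connector option listed above can override the Spark types assigned to individual columns. A minimal Scala sketch with placeholder column names:
val df = sqlContext.read
  .format(SQREAM_SOURCE_NAME)
  .options(sfOptions)
  .option("dbtable", "<table_name>")
  .option("customSchema", "id DECIMAL(38, 0), name STRING") // override types for these columns only
  .load()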
Spark data types mapped to SQream
Spark | SQream
---|---
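In the other direction, the Spark type of a column determines the SQream type it is written as, so casting a column before the write is a simple way to influence the target type. A sketch using standard Spark casts, with a placeholder column name and type:
import org.apache.spark.sql.functions.col
val prepared = df.withColumn("amount", col("amount").cast("decimal(18,4)")) // cast before writing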
Examples
JAVA
import com.sqream.driver.SqreamSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

import java.util.HashMap;

public class Main {
    public static void main(String[] args) {
        // Spark configuration
        // Optional configuration: https://spark.apache.org/docs/latest/configuration.html
        HashMap<String, String> config = new HashMap<>();
        config.put("spark.master", "local");
        SqreamSession sqreamSession = SqreamSession.getSession(config);

        // Spark properties
        // Optional properties: https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
        HashMap<String, String> props = new HashMap<>();
        props.put("url", "jdbc:Sqream://192.168.4.51:5000/master;user=sqream;password=sqream;cluster=false;logfile=logsFiles.txt;loggerlevel=DEBUG");
        props.put("dbtable", "test");

        // Read from a SQream table into a DataFrame
        Dataset<Row> dataFrame = sqreamSession.read(props);

        // Write the DataFrame back to the SQream table
        sqreamSession.write(dataFrame, props);
    }
}