Interface JobInfo
-
- All Known Implementing Classes:
CassandraJobInfo
public interface JobInfo

Provides job-specific configuration and information for bulk write operations. This interface does NOT extend Serializable; JobInfo instances are never serialized. For broadcast to executors, BroadcastableJobInfo is used instead, and executors reconstruct JobInfo instances from the broadcast data.
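The broadcast pattern described above can be sketched as follows. This is a hypothetical illustration only: the class names `JobSnapshot` and `RichJobInfo` stand in for `BroadcastableJobInfo` and `JobInfo`, which are assumptions and not the real API.

```java
import java.io.Serializable;

// Hypothetical sketch of the documented pattern: the rich, non-serializable
// object is never shipped to executors; a serializable snapshot is broadcast
// instead, and each executor rebuilds the rich object from it.
final class JobSnapshot implements Serializable {
    final String jobId;
    final int sidecarPort;

    JobSnapshot(String jobId, int sidecarPort) {
        this.jobId = jobId;
        this.sidecarPort = sidecarPort;
    }
}

final class RichJobInfo { // stand-in for JobInfo: deliberately not Serializable
    final String jobId;
    final int sidecarPort;

    private RichJobInfo(String jobId, int sidecarPort) {
        this.jobId = jobId;
        this.sidecarPort = sidecarPort;
    }

    // Executor-side reconstruction from the broadcast snapshot
    static RichJobInfo from(JobSnapshot snapshot) {
        return new RichJobInfo(snapshot.jobId, snapshot.sidecarPort);
    }
}

public class BroadcastPatternDemo {
    public static void main(String[] args) {
        JobSnapshot broadcast = new JobSnapshot("bulk-write-1", 9043);
        RichJobInfo onExecutor = RichJobInfo.from(broadcast);
        System.out.println(onExecutor.jobId + ":" + onExecutor.sidecarPort);
    }
}
```

Keeping the serialized surface to a small snapshot class avoids accidentally capturing non-serializable state in the Spark closure.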
-
-
Method Summary
All Methods · Instance Methods · Abstract Methods · Default Methods

CoordinatedWriteConf coordinatedWriteConf()
DigestAlgorithmSupplier digestAlgorithmSupplier()
int effectiveSidecarPort()
int getCommitBatchSize()
int getCommitThreadsPerInstance()
java.lang.String getConfiguredJobId() - An optional unique identifier supplied in the Spark configuration
ConsistencyLevel getConsistencyLevel()
default java.lang.String getId()
java.lang.String getLocalDC()
java.util.UUID getRestoreJobId() - Returns the identifier of the restore job created on Cassandra Sidecar
java.util.UUID getRestoreJobId(java.lang.String clusterId) - Returns the restore job identifier on Cassandra Sidecar of the cluster identified by clusterId; should be called in the coordinated write code path
boolean getSkipClean()
TokenPartitioner getTokenPartitioner()
double importCoordinatorTimeoutMultiplier()
default boolean isCoordinatedWriteEnabled()
int jobKeepAliveMinutes()
long jobTimeoutSeconds()
org.apache.cassandra.spark.data.QualifiedTableName qualifiedTableName()
boolean skipExtendedVerify()
int sstableDataSizeInMiB()
DataTransportInfo transportInfo()
-
-
-
Method Detail
-
getConsistencyLevel
ConsistencyLevel getConsistencyLevel()
-
getLocalDC
@Nullable java.lang.String getLocalDC()
-
sstableDataSizeInMiB
int sstableDataSizeInMiB()
- Returns:
- the max sstable data file size in mebibytes
-
getCommitBatchSize
int getCommitBatchSize()
-
getCommitThreadsPerInstance
int getCommitThreadsPerInstance()
-
getRestoreJobId
java.util.UUID getRestoreJobId()
Returns the identifier of the restore job created on Cassandra Sidecar.
- Returns:
- a time-based uuid
-
getRestoreJobId
java.util.UUID getRestoreJobId(@Nullable java.lang.String clusterId)
                        throws java.util.NoSuchElementException
Returns the restore job identifier on Cassandra Sidecar of the cluster identified by the clusterId. The method should be called in the coordinated write code path.
- Parameters:
clusterId - identifies the Cassandra cluster
- Returns:
- restore job identifier, a time-based uuid
- Throws:
java.util.NoSuchElementException- when there is no restoreJobId associated with the clusterId
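A caller-side sketch of this contract, assuming nothing beyond the documented signature: the `Map`-backed lookup below is a hypothetical stand-in for the real JobInfo state, used only to show handling of the documented NoSuchElementException.

```java
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.UUID;

// Hedged sketch (not the real implementation) of the documented contract of
// JobInfo#getRestoreJobId(String): return the restore job id for a cluster,
// or throw NoSuchElementException when the clusterId is unknown.
public class RestoreJobLookup {
    private final Map<String, UUID> restoreJobIds; // stand-in for real job state

    public RestoreJobLookup(Map<String, UUID> restoreJobIds) {
        this.restoreJobIds = restoreJobIds;
    }

    public UUID getRestoreJobId(String clusterId) {
        UUID id = restoreJobIds.get(clusterId);
        if (id == null) {
            throw new NoSuchElementException("No restore job id for cluster: " + clusterId);
        }
        return id;
    }

    public static void main(String[] args) {
        RestoreJobLookup lookup =
                new RestoreJobLookup(Map.of("cluster-1", UUID.randomUUID()));
        System.out.println(lookup.getRestoreJobId("cluster-1"));
        try {
            lookup.getRestoreJobId("unknown-cluster");
        } catch (NoSuchElementException e) {
            // Expected in the coordinated write path when the cluster is unknown
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```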
-
getConfiguredJobId
@Nullable java.lang.String getConfiguredJobId()
An optional unique identifier supplied in the Spark configuration
- Returns:
- an id string, or null
-
getId
default java.lang.String getId()
-
getTokenPartitioner
TokenPartitioner getTokenPartitioner()
-
skipExtendedVerify
boolean skipExtendedVerify()
-
getSkipClean
boolean getSkipClean()
-
digestAlgorithmSupplier
@NotNull DigestAlgorithmSupplier digestAlgorithmSupplier()
- Returns:
- the digest algorithm provider for the bulk job, used to calculate digests for SSTable components
-
qualifiedTableName
org.apache.cassandra.spark.data.QualifiedTableName qualifiedTableName()
-
transportInfo
DataTransportInfo transportInfo()
-
jobKeepAliveMinutes
int jobKeepAliveMinutes()
- Returns:
- job keep-alive time in minutes
-
jobTimeoutSeconds
long jobTimeoutSeconds()
- Returns:
- job timeout in seconds; see
WriterOptions.JOB_TIMEOUT_SECONDS
-
effectiveSidecarPort
int effectiveSidecarPort()
- Returns:
- sidecar service port
-
importCoordinatorTimeoutMultiplier
double importCoordinatorTimeoutMultiplier()
- Returns:
- multiplier to calculate the final timeout for import coordinator
-
coordinatedWriteConf
@Nullable CoordinatedWriteConf coordinatedWriteConf()
- Returns:
- CoordinatedWriteConf if configured, null otherwise
-
isCoordinatedWriteEnabled
default boolean isCoordinatedWriteEnabled()
- Returns:
- true if coordinated write is enabled, i.e. coordinatedWriteConf() returns a non-null value; false otherwise
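Given the documented contract, the default method plausibly reduces to a null check on coordinatedWriteConf(). The sketch below assumes that behavior; the minimal `CoordinatedWriteConf` and `JobInfoSketch` types are illustrative stand-ins, not the real API.

```java
// Minimal sketch, assuming the default simply checks coordinatedWriteConf()
// for null, as the documented return value describes.
interface CoordinatedWriteConf { }

interface JobInfoSketch {
    CoordinatedWriteConf coordinatedWriteConf(); // may return null

    default boolean isCoordinatedWriteEnabled() {
        return coordinatedWriteConf() != null;
    }
}

public class CoordinatedWriteDemo {
    public static void main(String[] args) {
        JobInfoSketch disabled = () -> null;
        JobInfoSketch enabled = () -> new CoordinatedWriteConf() { };
        System.out.println(disabled.isCoordinatedWriteEnabled()); // false
        System.out.println(enabled.isCoordinatedWriteEnabled());  // true
    }
}
```

Deriving the flag from the configuration object keeps the two sources of truth from drifting apart.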
-
-