PrefixSpan

class pyspark.ml.fpm.PrefixSpan(*, minSupport: float = 0.1, maxPatternLength: int = 10, maxLocalProjDBSize: int = 32000000, sequenceCol: str = 'sequence')

A parallel PrefixSpan algorithm to mine frequent sequential patterns. The algorithm is described in J. Pei et al., "PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth" (see here). This class is not yet an Estimator/Transformer; use the findFrequentSequentialPatterns() method to run the PrefixSpan algorithm.

New in version 2.4.0.

Notes

See Sequential Pattern Mining (Wikipedia).

Examples

>>> from pyspark.ml.fpm import PrefixSpan
>>> from pyspark.sql import Row
>>> df = sc.parallelize([Row(sequence=[[1, 2], [3]]),
...                      Row(sequence=[[1], [3, 2], [1, 2]]),
...                      Row(sequence=[[1, 2], [5]]),
...                      Row(sequence=[[6]])]).toDF()
>>> prefixSpan = PrefixSpan()
>>> prefixSpan.getMaxLocalProjDBSize()
32000000
>>> prefixSpan.getSequenceCol()
'sequence'
>>> prefixSpan.setMinSupport(0.5)
PrefixSpan...
>>> prefixSpan.setMaxPatternLength(5)
PrefixSpan...
>>> prefixSpan.findFrequentSequentialPatterns(df).sort("sequence").show(truncate=False)
+----------+----+
|sequence  |freq|
+----------+----+
|[[1]]     |3   |
|[[1], [3]]|2   |
|[[2]]     |3   |
|[[2, 1]]  |3   |
|[[3]]     |2   |
+----------+----+
...

Methods

clear(param)
    Clears a param from the param map if it has been explicitly set.
copy([extra])
    Creates a copy of this instance with the same uid and some extra params.
explainParam(param)
    Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()
    Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra])
    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, with ordering: default param values < user-supplied values < extra.
findFrequentSequentialPatterns(dataset)
    Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
getMaxLocalProjDBSize()
    Gets the value of maxLocalProjDBSize or its default value.
getMaxPatternLength()
    Gets the value of maxPatternLength or its default value.
getMinSupport()
    Gets the value of minSupport or its default value.
getOrDefault(param)
    Gets the value of a param in the user-supplied param map or its default value.
getParam(paramName)
    Gets a param by its name.
getSequenceCol()
    Gets the value of sequenceCol or its default value.
hasDefault(param)
    Checks whether a param has a default value.
hasParam(paramName)
    Tests whether this instance contains a param with a given (string) name.
isDefined(param)
    Checks whether a param is explicitly set by the user or has a default value.
isSet(param)
    Checks whether a param is explicitly set by the user.
set(param, value)
    Sets a parameter in the embedded param map.
setMaxLocalProjDBSize(value)
    Sets the value of maxLocalProjDBSize.
setMaxPatternLength(value)
    Sets the value of maxPatternLength.
setMinSupport(value)
    Sets the value of minSupport.
setParams(self, *[, minSupport, …])
    Sets params for PrefixSpan.
setSequenceCol(value)
    Sets the value of sequenceCol.

Attributes

maxLocalProjDBSize
    The maximum number of items allowed in a projected database before local processing.
maxPatternLength
    The maximal length of the sequential pattern.
minSupport
    The minimal support level of the sequential pattern.
params
    Returns all params ordered by name.
sequenceCol
    The name of the sequence column in dataset.

Methods Documentation

clear(param: pyspark.ml.param.Param) → None

Clears a param from the param map if it has been explicitly set.

copy(extra: Optional[ParamMap] = None) → JP

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters
    extra : dict, optional
        Extra parameters to copy to the new instance.

Returns
    JavaParams
        Copy of this instance.

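For illustration, a minimal sketch of copying with an extra param value (not part of the upstream page; it assumes an active SparkSession so the companion Java object can be created). The override lands in the copy only:

>>> from pyspark.ml.fpm import PrefixSpan
>>> ps = PrefixSpan(minSupport=0.2)
>>> ps2 = ps.copy({ps.minSupport: 0.4})
>>> ps.getMinSupport()
0.2
>>> ps2.getMinSupport()
0.4
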
explainParam(param: Union[str, pyspark.ml.param.Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters
    extra : dict, optional
        Extra param values.

Returns
    dict
        Merged param map.

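A quick sketch of that ordering (illustrative, assuming an active SparkSession): the user-supplied value overrides the default, the extra map overrides both, and untouched defaults survive the merge:

>>> ps = PrefixSpan(minSupport=0.5)
>>> merged = ps.extractParamMap({ps.minSupport: 0.9})
>>> merged[ps.minSupport]
0.9
>>> merged[ps.maxPatternLength]
10
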
findFrequentSequentialPatterns(dataset: pyspark.sql.dataframe.DataFrame) → pyspark.sql.dataframe.DataFrame

Finds the complete set of frequent sequential patterns in the input sequences of itemsets.

New in version 2.4.0.

Parameters
    dataset : pyspark.sql.DataFrame
        A dataframe containing a sequence column of type ArrayType(ArrayType(T)), where T is the item type for the input dataset.

Returns
    pyspark.sql.DataFrame
        A DataFrame that contains the patterns and their corresponding frequencies. Its schema will be:
        - sequence: ArrayType(ArrayType(T)) (T is the item type)
        - freq: Long

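As a supplementary sketch (assuming a SparkSession bound to the name spark), the input can also be built with createDataFrame; each row wraps one sequence, i.e., a list of itemsets, and the nested lists infer the ArrayType(ArrayType(LongType)) schema:

>>> df = spark.createDataFrame(
...     [([[1, 2], [3]],), ([[1], [3, 2], [1, 2]],)],
...     ["sequence"])
>>> patterns = PrefixSpan(minSupport=0.5).findFrequentSequentialPatterns(df)
>>> sorted(patterns.columns)
['freq', 'sequence']
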
getMaxLocalProjDBSize() → int

Gets the value of maxLocalProjDBSize or its default value.

New in version 3.0.0.

getMaxPatternLength() → int

Gets the value of maxPatternLength or its default value.

New in version 3.0.0.

getMinSupport() → float

Gets the value of minSupport or its default value.

New in version 3.0.0.

getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName: str) → pyspark.ml.param.Param

Gets a param by its name.

getSequenceCol() → str

Gets the value of sequenceCol or its default value.

New in version 3.0.0.

hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by the user or has a default value.

isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by the user.

set(param: pyspark.ml.param.Param, value: Any) → None

Sets a parameter in the embedded param map.

setMaxLocalProjDBSize(value: int) → pyspark.ml.fpm.PrefixSpan

Sets the value of maxLocalProjDBSize.

New in version 3.0.0.

setMaxPatternLength(value: int) → pyspark.ml.fpm.PrefixSpan

Sets the value of maxPatternLength.

New in version 3.0.0.

setMinSupport(value: float) → pyspark.ml.fpm.PrefixSpan

Sets the value of minSupport.

New in version 3.0.0.

setParams(self, *, minSupport=0.1, maxPatternLength=10, maxLocalProjDBSize=32000000, sequenceCol="sequence")

Sets params for PrefixSpan.

New in version 2.4.0.

setSequenceCol(value: str) → pyspark.ml.fpm.PrefixSpan

Sets the value of sequenceCol.

New in version 3.0.0.

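Each setter returns the PrefixSpan instance itself, so configuration calls chain naturally; a brief illustrative sketch (assuming an active SparkSession):

>>> ps = (PrefixSpan()
...       .setMinSupport(0.3)
...       .setMaxPatternLength(5)
...       .setSequenceCol("sequence"))
>>> ps.getMinSupport()
0.3
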
Attributes Documentation

maxLocalProjDBSize: pyspark.ml.param.Param[int] = Param(parent='undefined', name='maxLocalProjDBSize', doc='The maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing. If a projected database exceeds this size, another iteration of distributed prefix growth is run. Must be > 0.')

maxPatternLength: pyspark.ml.param.Param[int] = Param(parent='undefined', name='maxPatternLength', doc='The maximal length of the sequential pattern. Must be > 0.')

minSupport: pyspark.ml.param.Param[float] = Param(parent='undefined', name='minSupport', doc='The minimal support level of the sequential pattern. A sequential pattern that appears more than (minSupport * size-of-the-dataset) times will be output. Must be >= 0.')

params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

sequenceCol: pyspark.ml.param.Param[str] = Param(parent='undefined', name='sequenceCol', doc='The name of the sequence column in dataset; rows with nulls in this column are ignored.')

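To connect these Param attributes back to the getters above, a short inspection sketch (illustrative, assuming an active SparkSession):

>>> ps = PrefixSpan()
>>> ps.minSupport.name
'minSupport'
>>> ps.getOrDefault(ps.minSupport)
0.1
>>> [p.name for p in ps.params]
['maxLocalProjDBSize', 'maxPatternLength', 'minSupport', 'sequenceCol']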