LDA
class pyspark.ml.clustering.LDA(*, featuresCol: str = 'features', maxIter: int = 20, seed: Optional[int] = None, checkpointInterval: int = 10, k: int = 10, optimizer: str = 'online', learningOffset: float = 1024.0, learningDecay: float = 0.51, subsamplingRate: float = 0.05, optimizeDocConcentration: bool = True, docConcentration: Optional[List[float]] = None, topicConcentration: Optional[float] = None, topicDistributionCol: str = 'topicDistribution', keepLastCheckpoint: bool = True)
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.

Terminology:

- "term" = "word": an element of the vocabulary
- "token": an instance of a term appearing in a document
- "topic": a multinomial distribution over terms representing some concept
- "document": one piece of text, corresponding to one row in the input data

Original LDA paper (journal version):
Blei, Ng, and Jordan. "Latent Dirichlet Allocation." JMLR, 2003.
Input data (featuresCol): LDA is given a collection of documents as input data, via the featuresCol parameter. Each document is specified as a Vector of length vocabSize, where each entry is the count for the corresponding term (word) in the document. Feature transformers such as pyspark.ml.feature.Tokenizer and pyspark.ml.feature.CountVectorizer can be useful for converting text to word count vectors.

New in version 2.0.0.

Examples

>>> from pyspark.ml.linalg import Vectors, SparseVector
>>> from pyspark.ml.clustering import LDA
>>> df = spark.createDataFrame([[1, Vectors.dense([0.0, 1.0])],
...                             [2, SparseVector(2, {0: 1.0})],], ["id", "features"])
>>> lda = LDA(k=2, seed=1, optimizer="em")
>>> lda.setMaxIter(10)
LDA...
>>> lda.getMaxIter()
10
>>> lda.clear(lda.maxIter)
>>> model = lda.fit(df)
>>> model.setSeed(1)
DistributedLDAModel...
>>> model.getTopicDistributionCol()
'topicDistribution'
>>> model.isDistributed()
True
>>> localModel = model.toLocal()
>>> localModel.isDistributed()
False
>>> model.vocabSize()
2
>>> model.describeTopics().show()
+-----+-----------+--------------------+
|topic|termIndices|         termWeights|
+-----+-----------+--------------------+
|    0|     [1, 0]|[0.50401530077160...|
|    1|     [0, 1]|[0.50401530077160...|
+-----+-----------+--------------------+
...
>>> model.topicsMatrix()
DenseMatrix(2, 2, [0.496, 0.504, 0.504, 0.496], 0)
>>> lda_path = temp_path + "/lda"
>>> lda.save(lda_path)
>>> sameLDA = LDA.load(lda_path)
>>> distributed_model_path = temp_path + "/lda_distributed_model"
>>> model.save(distributed_model_path)
>>> sameModel = DistributedLDAModel.load(distributed_model_path)
>>> local_model_path = temp_path + "/lda_local_model"
>>> localModel.save(local_model_path)
>>> sameLocalModel = LocalLDAModel.load(local_model_path)
>>> model.transform(df).take(1) == sameLocalModel.transform(df).take(1)
True
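In practice the count vectors usually come from a small preprocessing pipeline rather than being built by hand. The following sketch is illustrative only; the corpus, column names, and parameter values are assumptions, not taken from the example above:

>>> from pyspark.ml import Pipeline
>>> from pyspark.ml.feature import Tokenizer, CountVectorizer
>>> text_df = spark.createDataFrame([(0, "spark spark hadoop"),
...                                  (1, "topic model text")], ["id", "text"])
>>> tokenizer = Tokenizer(inputCol="text", outputCol="tokens")
>>> vectorizer = CountVectorizer(inputCol="tokens", outputCol="features")
>>> pipeline = Pipeline(stages=[tokenizer, vectorizer, LDA(k=2, maxIter=10)])
>>> pipelineModel = pipeline.fit(text_df)  # LDA reads the "features" column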
Methods

clear(param)
    Clears a param from the param map if it has been explicitly set.
copy([extra])
    Creates a copy of this instance with the same uid and some extra params.
explainParam(param)
    Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()
    Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra])
    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
fit(dataset[, params])
    Fits a model to the input dataset with optional parameters.
fitMultiple(dataset, paramMaps)
    Fits a model to the input dataset for each param map in paramMaps.
getCheckpointInterval()
    Gets the value of checkpointInterval or its default value.
getDocConcentration()
    Gets the value of docConcentration or its default value.
getFeaturesCol()
    Gets the value of featuresCol or its default value.
getK()
    Gets the value of k or its default value.
getKeepLastCheckpoint()
    Gets the value of keepLastCheckpoint or its default value.
getLearningDecay()
    Gets the value of learningDecay or its default value.
getLearningOffset()
    Gets the value of learningOffset or its default value.
getMaxIter()
    Gets the value of maxIter or its default value.
getOptimizeDocConcentration()
    Gets the value of optimizeDocConcentration or its default value.
getOptimizer()
    Gets the value of optimizer or its default value.
getOrDefault(param)
    Gets the value of a param in the user-supplied param map or its default value.
getParam(paramName)
    Gets a param by its name.
getSeed()
    Gets the value of seed or its default value.
getSubsamplingRate()
    Gets the value of subsamplingRate or its default value.
getTopicConcentration()
    Gets the value of topicConcentration or its default value.
getTopicDistributionCol()
    Gets the value of topicDistributionCol or its default value.
hasDefault(param)
    Checks whether a param has a default value.
hasParam(paramName)
    Tests whether this instance contains a param with a given (string) name.
isDefined(param)
    Checks whether a param is explicitly set by user or has a default value.
isSet(param)
    Checks whether a param is explicitly set by user.
load(path)
    Reads an ML instance from the input path, a shortcut of read().load(path).
read()
    Returns an MLReader instance for this class.
save(path)
    Save this ML instance to the given path, a shortcut of write().save(path).
set(param, value)
    Sets a parameter in the embedded param map.
setCheckpointInterval(value)
    Sets the value of checkpointInterval.
setDocConcentration(value)
    Sets the value of docConcentration.
setFeaturesCol(value)
    Sets the value of featuresCol.
setK(value)
    Sets the value of k.
setKeepLastCheckpoint(value)
    Sets the value of keepLastCheckpoint.
setLearningDecay(value)
    Sets the value of learningDecay.
setLearningOffset(value)
    Sets the value of learningOffset.
setMaxIter(value)
    Sets the value of maxIter.
setOptimizeDocConcentration(value)
    Sets the value of optimizeDocConcentration.
setOptimizer(value)
    Sets the value of optimizer.
setParams(self, *[, featuresCol, maxIter, …])
    Sets params for LDA.
setSeed(value)
    Sets the value of seed.
setSubsamplingRate(value)
    Sets the value of subsamplingRate.
setTopicConcentration(value)
    Sets the value of topicConcentration.
setTopicDistributionCol(value)
    Sets the value of topicDistributionCol.
write()
    Returns an MLWriter instance for this ML instance.

Attributes

params
    Returns all params ordered by name.

Methods Documentation
clear(param: pyspark.ml.param.Param) → None

Clears a param from the param map if it has been explicitly set.
copy(extra: Optional[ParamMap] = None) → JP

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters
    extra : dict, optional
        Extra parameters to copy to the new instance.

Returns
    JavaParams
        Copy of this instance.
explainParam(param: Union[str, pyspark.ml.param.Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters
    extra : dict, optional
        extra param values

Returns
    dict
        merged param map
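A minimal sketch of that merge ordering (the values below are illustrative):

>>> lda = LDA(k=5)                         # user-supplied k=5
>>> pm = lda.extractParamMap({lda.k: 3})   # extra overrides user-supplied
>>> pm[lda.k]
3
>>> pm[lda.maxIter]                        # unset, so the default applies
20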
fit(dataset: pyspark.sql.dataframe.DataFrame, params: Union[ParamMap, List[ParamMap], Tuple[ParamMap], None] = None) → Union[M, List[M]]

Fits a model to the input dataset with optional parameters.

New in version 1.3.0.

Parameters
    dataset : pyspark.sql.DataFrame
        input dataset.
    params : dict or list or tuple, optional
        an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns
    Transformer or a list of Transformer
        fitted model(s)
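For instance, a single param map overrides the embedded value for that one call, while a list of maps returns one model per map (a sketch reusing lda and df from the example above):

>>> single = lda.fit(df, {lda.maxIter: 5})
>>> several = lda.fit(df, [{lda.maxIter: 5}, {lda.maxIter: 10}])
>>> len(several)
2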
fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[ParamMap]) → Iterator[Tuple[int, M]]

Fits a model to the input dataset for each param map in paramMaps.

New in version 2.3.0.

Parameters
    dataset : pyspark.sql.DataFrame
        input dataset.
    paramMaps : collections.abc.Sequence
        A Sequence of param maps.

Returns
    _FitMultipleIterator
        A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
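Because indices may arrive out of order, collecting models by index is the usual pattern (a sketch reusing lda and df from the example above):

>>> paramMaps = [{lda.k: 2}, {lda.k: 3}]
>>> models = [None] * len(paramMaps)
>>> for index, model in lda.fitMultiple(df, paramMaps):
...     models[index] = model   # models[i] was fit with paramMaps[i]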
getCheckpointInterval() → int

Gets the value of checkpointInterval or its default value.

getDocConcentration() → List[float]

Gets the value of docConcentration or its default value.

New in version 2.0.0.

getFeaturesCol() → str

Gets the value of featuresCol or its default value.

getKeepLastCheckpoint() → bool

Gets the value of keepLastCheckpoint or its default value.

New in version 2.0.0.

getLearningDecay() → float

Gets the value of learningDecay or its default value.

New in version 2.0.0.

getLearningOffset() → float

Gets the value of learningOffset or its default value.

New in version 2.0.0.

getMaxIter() → int

Gets the value of maxIter or its default value.

getOptimizeDocConcentration() → bool

Gets the value of optimizeDocConcentration or its default value.

New in version 2.0.0.

getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName: str) → pyspark.ml.param.Param

Gets a param by its name.

getSeed() → int

Gets the value of seed or its default value.

getSubsamplingRate() → float

Gets the value of subsamplingRate or its default value.

New in version 2.0.0.

getTopicConcentration() → float

Gets the value of topicConcentration or its default value.

New in version 2.0.0.

getTopicDistributionCol() → str

Gets the value of topicDistributionCol or its default value.

New in version 2.0.0.
hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user.

classmethod load(path: str) → RL

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read() → pyspark.ml.util.JavaMLReader[RL]

Returns an MLReader instance for this class.

save(path: str) → None

Save this ML instance to the given path, a shortcut of write().save(path).

set(param: pyspark.ml.param.Param, value: Any) → None

Sets a parameter in the embedded param map.
setCheckpointInterval(value: int) → pyspark.ml.clustering.LDA

Sets the value of checkpointInterval.

New in version 2.0.0.

setDocConcentration(value: List[float]) → pyspark.ml.clustering.LDA

Sets the value of docConcentration.

Examples

>>> algo = LDA().setDocConcentration([0.1, 0.2])
>>> algo.getDocConcentration()
[0.1..., 0.2...]

New in version 2.0.0.

setFeaturesCol(value: str) → pyspark.ml.clustering.LDA

Sets the value of featuresCol.

New in version 2.0.0.

setK(value: int) → pyspark.ml.clustering.LDA

Sets the value of k.

Examples

>>> algo = LDA().setK(10)
>>> algo.getK()
10

New in version 2.0.0.

setKeepLastCheckpoint(value: bool) → pyspark.ml.clustering.LDA

Sets the value of keepLastCheckpoint.

Examples

>>> algo = LDA().setKeepLastCheckpoint(False)
>>> algo.getKeepLastCheckpoint()
False

New in version 2.0.0.

setLearningDecay(value: float) → pyspark.ml.clustering.LDA

Sets the value of learningDecay.

Examples

>>> algo = LDA().setLearningDecay(0.1)
>>> algo.getLearningDecay()
0.1...

New in version 2.0.0.

setLearningOffset(value: float) → pyspark.ml.clustering.LDA

Sets the value of learningOffset.

Examples

>>> algo = LDA().setLearningOffset(100)
>>> algo.getLearningOffset()
100.0

New in version 2.0.0.

setMaxIter(value: int) → pyspark.ml.clustering.LDA

Sets the value of maxIter.

New in version 2.0.0.

setOptimizeDocConcentration(value: bool) → pyspark.ml.clustering.LDA

Sets the value of optimizeDocConcentration.

Examples

>>> algo = LDA().setOptimizeDocConcentration(True)
>>> algo.getOptimizeDocConcentration()
True

New in version 2.0.0.
setOptimizer(value: str) → pyspark.ml.clustering.LDA

Sets the value of optimizer. Currently only 'em' and 'online' are supported.

Examples

>>> algo = LDA().setOptimizer("em")
>>> algo.getOptimizer()
'em'

New in version 2.0.0.
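The learning-rate and mini-batch params (learningOffset, learningDecay, subsamplingRate, optimizeDocConcentration) go together with the online optimizer; a sketch of one such configuration, with illustrative values (every setter returns the LDA instance, so calls chain):

>>> algo = (LDA(k=10, maxIter=50)
...         .setOptimizer("online")
...         .setLearningOffset(1024.0)
...         .setLearningDecay(0.51)
...         .setSubsamplingRate(0.05))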
setParams(self, *, featuresCol="features", maxIter=20, seed=None, checkpointInterval=10, k=10, optimizer="online", learningOffset=1024.0, learningDecay=0.51, subsamplingRate=0.05, optimizeDocConcentration=True, docConcentration=None, topicConcentration=None, topicDistributionCol="topicDistribution", keepLastCheckpoint=True)

Sets params for LDA.

New in version 2.0.0.

setSeed(value: int) → pyspark.ml.clustering.LDA

Sets the value of seed.

New in version 2.0.0.

setSubsamplingRate(value: float) → pyspark.ml.clustering.LDA

Sets the value of subsamplingRate.

Examples

>>> algo = LDA().setSubsamplingRate(0.1)
>>> algo.getSubsamplingRate()
0.1...

New in version 2.0.0.

setTopicConcentration(value: float) → pyspark.ml.clustering.LDA

Sets the value of topicConcentration.

Examples

>>> algo = LDA().setTopicConcentration(0.5)
>>> algo.getTopicConcentration()
0.5...

New in version 2.0.0.

setTopicDistributionCol(value: str) → pyspark.ml.clustering.LDA

Sets the value of topicDistributionCol.

Examples

>>> algo = LDA().setTopicDistributionCol("topicDistributionCol")
>>> algo.getTopicDistributionCol()
'topicDistributionCol'

New in version 2.0.0.
write() → pyspark.ml.util.JavaMLWriter

Returns an MLWriter instance for this ML instance.

Attributes Documentation
checkpointInterval = Param(parent='undefined', name='checkpointInterval', doc='set checkpoint interval (>= 1) or disable checkpoint (-1). E.g. 10 means that the cache will get checkpointed every 10 iterations. Note: this setting will be ignored if the checkpoint directory is not set in the SparkContext.')
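Since the interval is ignored without a checkpoint directory, a sketch of enabling it (the directory path is an illustrative assumption):

>>> spark.sparkContext.setCheckpointDir("/tmp/lda-checkpoints")  # any HDFS/local path
>>> lda = LDA(k=10, optimizer="em", checkpointInterval=10)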
docConcentration = Param(parent='undefined', name='docConcentration', doc='Concentration parameter (commonly named "alpha") for the prior placed on documents\' distributions over topics ("theta").')

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')

k = Param(parent='undefined', name='k', doc='The number of topics (clusters) to infer. Must be > 1.')

keepLastCheckpoint = Param(parent='undefined', name='keepLastCheckpoint', doc='(For EM optimizer) If using checkpointing, this indicates whether to keep the last checkpoint. If false, then the checkpoint will be deleted. Deleting the checkpoint can cause failures if a data partition is lost, so set this bit with care.')

learningDecay = Param(parent='undefined', name='learningDecay', doc='Learning rate, set as an exponential decay rate. This should be between (0.5, 1.0] to guarantee asymptotic convergence.')

learningOffset = Param(parent='undefined', name='learningOffset', doc='A (positive) learning parameter that downweights early iterations. Larger values make early iterations count less.')

maxIter = Param(parent='undefined', name='maxIter', doc='max number of iterations (>= 0).')

optimizeDocConcentration = Param(parent='undefined', name='optimizeDocConcentration', doc='Indicates whether the docConcentration (Dirichlet parameter for document-topic distribution) will be optimized during training.')

optimizer = Param(parent='undefined', name='optimizer', doc='Optimizer or inference algorithm used to estimate the LDA model. Supported: online, em')
params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
seed = Param(parent='undefined', name='seed', doc='random seed.')

subsamplingRate = Param(parent='undefined', name='subsamplingRate', doc='Fraction of the corpus to be sampled and used in each iteration of mini-batch gradient descent, in range (0, 1].')

topicConcentration = Param(parent='undefined', name='topicConcentration', doc='Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics\' distributions over terms.')
topicDistributionCol = Param(parent='undefined', name='topicDistributionCol', doc='Output column with estimates of the topic mixture distribution for each document (often called "theta" in the literature). Returns a vector of zeros for an empty document.')
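A sketch of reading this output column after transform (reusing the model and df fit in the example above):

>>> transformed = model.transform(df)
>>> transformed.select("id", "topicDistribution").show(truncate=False)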