PolynomialExpansion
class pyspark.ml.feature.PolynomialExpansion(*, degree: int = 2, inputCol: Optional[str] = None, outputCol: Optional[str] = None)
Perform feature expansion in a polynomial space. As stated in the Wikipedia article on polynomial expansion, "In mathematics, an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition". Take a 2-variable feature vector as an example, (x, y): expanding it with degree 2 yields (x, x * x, y, x * y, y * y).

New in version 1.4.0.

Examples

>>> from pyspark.ml.linalg import Vectors
>>> df = spark.createDataFrame([(Vectors.dense([0.5, 2.0]),)], ["dense"])
>>> px = PolynomialExpansion(degree=2)
>>> px.setInputCol("dense")
PolynomialExpansion...
>>> px.setOutputCol("expanded")
PolynomialExpansion...
>>> px.transform(df).head().expanded
DenseVector([0.5, 0.25, 2.0, 1.0, 4.0])
>>> px.setParams(outputCol="test").transform(df).head().test
DenseVector([0.5, 0.25, 2.0, 1.0, 4.0])
>>> polyExpansionPath = temp_path + "/poly-expansion"
>>> px.save(polyExpansionPath)
>>> loadedPx = PolynomialExpansion.load(polyExpansionPath)
>>> loadedPx.getDegree() == px.getDegree()
True
>>> loadedPx.transform(df).take(1) == px.transform(df).take(1)
True

Methods

clear(param)
    Clears a param from the param map if it has been explicitly set.
copy([extra])
    Creates a copy of this instance with the same uid and some extra params.
explainParam(param)
    Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()
    Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra])
    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getDegree()
    Gets the value of degree or its default value.
getInputCol()
    Gets the value of inputCol or its default value.
getOrDefault(param)
    Gets the value of a param in the user-supplied param map or its default value.
getOutputCol()
    Gets the value of outputCol or its default value.
getParam(paramName)
    Gets a param by its name.
hasDefault(param)
    Checks whether a param has a default value.
hasParam(paramName)
    Tests whether this instance contains a param with a given (string) name.
isDefined(param)
    Checks whether a param is explicitly set by user or has a default value.
isSet(param)
    Checks whether a param is explicitly set by user.
load(path)
    Reads an ML instance from the input path, a shortcut of read().load(path).
read()
    Returns an MLReader instance for this class.
save(path)
    Save this ML instance to the given path, a shortcut of write().save(path).
set(param, value)
    Sets a parameter in the embedded param map.
setDegree(value)
    Sets the value of degree.
setInputCol(value)
    Sets the value of inputCol.
setOutputCol(value)
    Sets the value of outputCol.
setParams(self, *[, degree, inputCol, …])
    Sets params for this PolynomialExpansion.
transform(dataset[, params])
    Transforms the input dataset with optional parameters.
write()
    Returns an MLWriter instance for this ML instance.

Attributes

degree
    The polynomial degree to expand (>= 1).
inputCol
    Input column name.
outputCol
    Output column name.
params
    Returns all params ordered by name.

Methods Documentation
clear(param: pyspark.ml.param.Param) → None
    Clears a param from the param map if it has been explicitly set.
copy(extra: Optional[ParamMap] = None) → JP
    Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

    Parameters
        extra : dict, optional
            Extra parameters to copy to the new instance

    Returns
        JavaParams
            Copy of this instance
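For example, a brief sketch of the copy semantics (reusing a configured instance like the one in the Examples above; the extra dict maps Param objects to override values):

>>> px = PolynomialExpansion(degree=2, inputCol="dense", outputCol="expanded")
>>> px3 = px.copy({px.degree: 3})  # overrides apply to the copy only
>>> px3.uid == px.uid  # same uid, per the contract above
True
>>> px3.getDegree()
3
>>> px.getDegree()  # the original is left untouched
2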
explainParam(param: Union[str, pyspark.ml.param.Param]) → str
    Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams() → str
    Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap(extra: Optional[ParamMap] = None) → ParamMap
    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

    Parameters
        extra : dict, optional
            extra param values

    Returns
        dict
            merged param map
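A minimal sketch of that merge ordering (default < user-supplied < extra), assuming a fresh instance:

>>> px = PolynomialExpansion()  # degree falls back to its default of 2
>>> px.extractParamMap({px.degree: 4})[px.degree]  # extra wins over the default
4
>>> px.setDegree(3)  # a user-supplied value shadows the default
PolynomialExpansion...
>>> px.extractParamMap()[px.degree]
3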
getDegree() → int
    Gets the value of degree or its default value.

    New in version 1.4.0.

getInputCol() → str
    Gets the value of inputCol or its default value.
getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]
    Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
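For instance (a small sketch; both a param name string and a Param object are accepted):

>>> px = PolynomialExpansion()
>>> px.getOrDefault("degree")  # falls back to the default, since degree was never set
2
>>> px.getOrDefault(px.degree)
2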
getOutputCol() → str
    Gets the value of outputCol or its default value.
getParam(paramName: str) → pyspark.ml.param.Param
    Gets a param by its name.
hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
    Checks whether a param has a default value.
hasParam(paramName: str) → bool
    Tests whether this instance contains a param with a given (string) name.
isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
    Checks whether a param is explicitly set by user or has a default value.
isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
    Checks whether a param is explicitly set by user.
classmethod load(path: str) → RL
    Reads an ML instance from the input path, a shortcut of read().load(path).
classmethod read() → pyspark.ml.util.JavaMLReader[RL]
    Returns an MLReader instance for this class.
save(path: str) → None
    Save this ML instance to the given path, a shortcut of write().save(path).
set(param: pyspark.ml.param.Param, value: Any) → None
    Sets a parameter in the embedded param map.
setDegree(value: int) → pyspark.ml.feature.PolynomialExpansion
    Sets the value of degree.

    New in version 1.4.0.
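As a rough guide to how degree affects output width: for n input features expanded at degree d, the output vector has C(n + d, d) - 1 entries (all monomials of total degree 1 through d). A hedged sketch, reusing the spark session and Vectors import from the Examples above (math.comb needs Python 3.8+):

>>> import math
>>> df = spark.createDataFrame([(Vectors.dense([0.5, 2.0]),)], ["dense"])
>>> px = PolynomialExpansion(inputCol="dense", outputCol="expanded")
>>> for d in (2, 3):
...     n_terms = len(px.setDegree(d).transform(df).head().expanded)
...     assert n_terms == math.comb(2 + d, d) - 1  # 5 for d=2, 9 for d=3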
setInputCol(value: str) → pyspark.ml.feature.PolynomialExpansion
    Sets the value of inputCol.
setOutputCol(value: str) → pyspark.ml.feature.PolynomialExpansion
    Sets the value of outputCol.
setParams(self, *, degree=2, inputCol=None, outputCol=None)
    Sets params for this PolynomialExpansion.

    New in version 1.4.0.
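For example, resetting several params in one call (sketch):

>>> px = PolynomialExpansion()
>>> px.setParams(degree=3, inputCol="dense", outputCol="expanded")
PolynomialExpansion...
>>> px.getDegree(), px.getOutputCol()
(3, 'expanded')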
transform(dataset: pyspark.sql.dataframe.DataFrame, params: Optional[ParamMap] = None) → pyspark.sql.dataframe.DataFrame
    Transforms the input dataset with optional parameters.

    New in version 1.3.0.

    Parameters
        dataset : pyspark.sql.DataFrame
            input dataset
        params : dict, optional
            an optional param map that overrides embedded params.

    Returns
        pyspark.sql.DataFrame
            transformed dataset
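A short sketch of the params override, reusing df from the Examples above: values passed in params apply only to this call, so the embedded params stay unchanged:

>>> px = PolynomialExpansion(degree=2, inputCol="dense", outputCol="expanded")
>>> out = px.transform(df, {px.degree: 3})  # one-off degree-3 expansion
>>> len(out.head().expanded)
9
>>> px.getDegree()  # embedded param unchanged
2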
write() → pyspark.ml.util.JavaMLWriter
    Returns an MLWriter instance for this ML instance.

Attributes Documentation
degree: pyspark.ml.param.Param[int] = Param(parent='undefined', name='degree', doc='the polynomial degree to expand (>= 1)')
inputCol = Param(parent='undefined', name='inputCol', doc='input column name.')
outputCol = Param(parent='undefined', name='outputCol', doc='output column name.')
params
    Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
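For this transformer that ordering looks like (sketch):

>>> [p.name for p in PolynomialExpansion().params]
['degree', 'inputCol', 'outputCol']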