pyspark.sql.functions.nanvl
pyspark.sql.functions.nanvl(col1: ColumnOrName, col2: ColumnOrName) → pyspark.sql.column.Column
Returns col1 if it is not NaN, or col2 if col1 is NaN. Both inputs should be floating point columns (DoubleType or FloatType).

New in version 1.6.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
    col1 : Column or str
        first column to check for NaN.
    col2 : Column or str
        column to return when col1 is NaN.
Returns
    Column
        value from the first column, or from the second if the first is NaN.
 
Examples

>>> df = spark.createDataFrame([(1.0, float('nan')), (float('nan'), 2.0)], ("a", "b"))
>>> df.select(nanvl("a", "b").alias("r1"), nanvl(df.a, df.b).alias("r2")).collect()
[Row(r1=1.0, r2=1.0), Row(r1=2.0, r2=2.0)]
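As a further sketch (not part of the original page, reusing the df defined above): since the second argument may be any Column, a literal built with lit() can serve as a constant fallback for NaN values.

>>> from pyspark.sql.functions import lit, nanvl
>>> # replace NaN in column "a" with the constant 0.0
>>> df.select(nanvl("a", lit(0.0)).alias("r")).collect()
[Row(r=1.0), Row(r=0.0)]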