pyspark.sql.DataFrameReader.csv
DataFrameReader.csv(path: Union[str, List[str]], schema: Union[pyspark.sql.types.StructType, str, None] = None, sep: Optional[str] = None, encoding: Optional[str] = None, quote: Optional[str] = None, escape: Optional[str] = None, comment: Optional[str] = None, header: Union[bool, str, None] = None, inferSchema: Union[bool, str, None] = None, ignoreLeadingWhiteSpace: Union[bool, str, None] = None, ignoreTrailingWhiteSpace: Union[bool, str, None] = None, nullValue: Optional[str] = None, nanValue: Optional[str] = None, positiveInf: Optional[str] = None, negativeInf: Optional[str] = None, dateFormat: Optional[str] = None, timestampFormat: Optional[str] = None, maxColumns: Union[str, int, None] = None, maxCharsPerColumn: Union[str, int, None] = None, maxMalformedLogPerPartition: Union[str, int, None] = None, mode: Optional[str] = None, columnNameOfCorruptRecord: Optional[str] = None, multiLine: Union[bool, str, None] = None, charToEscapeQuoteEscaping: Optional[str] = None, samplingRatio: Union[str, float, None] = None, enforceSchema: Union[bool, str, None] = None, emptyValue: Optional[str] = None, locale: Optional[str] = None, lineSep: Optional[str] = None, pathGlobFilter: Union[bool, str, None] = None, recursiveFileLookup: Union[bool, str, None] = None, modifiedBefore: Union[bool, str, None] = None, modifiedAfter: Union[bool, str, None] = None, unescapedQuoteHandling: Optional[str] = None) → DataFrame
Loads a CSV file and returns the result as a DataFrame.

This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.

New in version 2.0.0.

Changed in version 3.4.0: Supports Spark Connect.
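Supplying the schema up front avoids the extra pass over the data that inferSchema triggers. A minimal sketch, assuming an active SparkSession named spark and a hypothetical file people.csv:

>>> from pyspark.sql.types import StructType, StructField, StringType, IntegerType
>>> schema = StructType([
...     StructField("name", StringType(), True),
...     StructField("age", IntegerType(), True),
... ])
>>> df = spark.read.csv("people.csv", schema=schema)  # no schema-inference pass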
Parameters

path : str or list
    string, or list of strings, for input path(s), or RDD of Strings storing CSV rows.
schema : pyspark.sql.types.StructType or str, optional
    an optional pyspark.sql.types.StructType for the input schema or a DDL-formatted string (for example, col0 INT, col1 DOUBLE; a sketch of this form follows the list).
 
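The DDL-formatted string form is equivalent to passing a StructType; a brief sketch, reusing the example string from the schema parameter above (the path is hypothetical):

>>> df = spark.read.csv("data.csv", schema="col0 INT, col1 DOUBLE")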
Other Parameters

Extra options
    For the extra options, refer to Data Source Option for the version you use.
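Extra options can be supplied either as keyword arguments or through DataFrameReader.option() before the csv() call; a brief sketch using the sep and header options from the signature above (the path is hypothetical):

>>> df = (spark.read
...       .option("sep", ";")
...       .option("header", "true")
...       .csv("data.csv"))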
 
Examples

Write a DataFrame into a CSV file and read it back.

>>> import tempfile
>>> with tempfile.TemporaryDirectory() as d:
...     # Write a DataFrame into a CSV file
...     df = spark.createDataFrame([{"age": 100, "name": "Hyukjin Kwon"}])
...     df.write.mode("overwrite").format("csv").save(d)
...
...     # Read the CSV file as a DataFrame with 'nullValue' option set to 'Hyukjin Kwon'.
...     spark.read.csv(d, schema=df.schema, nullValue="Hyukjin Kwon").show()
+---+----+
|age|name|
+---+----+
|100|NULL|
+---+----+
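Because path also accepts a list of strings, several files can be read in a single call; a sketch assuming two hypothetical files that share the same layout:

>>> df = spark.read.csv(["part1.csv", "part2.csv"], header=True)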