[MINOR][SQL] Update the description of spark.sql.files.ignoreCorruptFiles and spark.sql.columnNameOfCorruptRecord

### What changes were proposed in this pull request?
1. The description of `spark.sql.files.ignoreCorruptFiles` is not accurate. When the file does not exist, Spark issues the following error message:
```
org.apache.spark.sql.AnalysisException: Path does not exist: file:/nonexist/path;
```
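As a minimal sketch (not part of this patch), the snippet below illustrates the point: the option only governs corrupt files that do exist, while reading a non-existing path still fails at analysis time. The object name, app name, and path are hypothetical.

```scala
import org.apache.spark.sql.{AnalysisException, SparkSession}

object IgnoreCorruptFilesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("IgnoreCorruptFilesSketch") // hypothetical app name
      .getOrCreate()

    // Even with the flag enabled, a non-existing path is not silently ignored.
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

    try {
      // Hypothetical non-existing path; this still fails with
      // "org.apache.spark.sql.AnalysisException: Path does not exist: ..."
      spark.read.parquet("/nonexist/path").show()
    } catch {
      case e: AnalysisException =>
        println(s"Still fails despite ignoreCorruptFiles=true: ${e.getMessage}")
    }

    spark.stop()
  }
}
```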

2. `spark.sql.columnNameOfCorruptRecord` also affects the CSV format, but the current description only mentions the JSON format.
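The following sketch (again not part of the patch) shows the corrupt-record column applying to CSV as well; the sample data, schema, and object name are made up for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

object CorruptRecordCsvSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("CorruptRecordCsvSketch") // hypothetical app name
      .getOrCreate()
    import spark.implicits._

    // Second line is malformed: "abc" cannot be parsed as an integer.
    val csvLines = Seq("1,hello", "abc,world").toDS()

    // The corrupt-record column (default name "_corrupt_record", i.e. the value of
    // spark.sql.columnNameOfCorruptRecord) must be declared in the user-specified schema.
    val schema = new StructType()
      .add("id", IntegerType)
      .add("text", StringType)
      .add("_corrupt_record", StringType)

    val df = spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .csv(csvLines)

    // The unparsable row keeps its raw text in _corrupt_record, for CSV just as for JSON.
    df.show(truncate = false)

    spark.stop()
  }
}
```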

### How was this patch tested?
N/A

Author: Xiao Li <gatorsmile@gmail.com>

Closes #18184 from gatorsmile/updateMessage.
gatorsmile authored and cloud-fan committed Jun 2, 2017
1 parent 16186cd commit 2a780ac
Showing 1 changed file with 3 additions and 3 deletions.
```
@@ -345,7 +345,8 @@ object SQLConf {
     .createWithDefault(true)

   val COLUMN_NAME_OF_CORRUPT_RECORD = buildConf("spark.sql.columnNameOfCorruptRecord")
-    .doc("The name of internal column for storing raw/un-parsed JSON records that fail to parse.")
+    .doc("The name of internal column for storing raw/un-parsed JSON and CSV records that fail " +
+      "to parse.")
     .stringConf
     .createWithDefault("_corrupt_record")

@@ -535,8 +536,7 @@ object SQLConf {

   val IGNORE_CORRUPT_FILES = buildConf("spark.sql.files.ignoreCorruptFiles")
     .doc("Whether to ignore corrupt files. If true, the Spark jobs will continue to run when " +
-      "encountering corrupted or non-existing and contents that have been read will still be " +
-      "returned.")
+      "encountering corrupted files and the contents that have been read will still be returned.")
     .booleanConf
     .createWithDefault(false)
```
