fix sql and streaming doc warnings
mengxr committed May 17, 2015
1 parent 2b4371e commit e3f83fe
Showing 2 changed files with 3 additions and 1 deletion.
1 change: 1 addition & 0 deletions python/pyspark/sql/dataframe.py
@@ -943,6 +943,7 @@ def replace(self, to_replace, value, subset=None):
         Columns specified in subset that do not have matching data type are ignored.
         For example, if `value` is a string, and subset contains a non-string column,
         then the non-string column is simply ignored.
+
         >>> df4.replace(10, 20).show()
         +----+------+-----+
         | age|height| name|
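The subset rule quoted above can be illustrated with the same doctest DataFrame. A minimal sketch, assuming `df4` has the columns shown in the diff (age and height are numeric, name is a string):

    # 'Alice' is a string, so the non-string 'age' column listed in
    # subset is simply ignored; only 'name' is searched for matches.
    df4.replace('Alice', 'Bob', subset=['name', 'age']).show()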
3 changes: 2 additions & 1 deletion python/pyspark/streaming/kafka.py
@@ -132,11 +132,12 @@ def createRDD(sc, kafkaParams, offsetRanges, leaders={},
         .. note:: Experimental
 
         Create a RDD from Kafka using offset ranges for each topic and partition.
+
         :param sc: SparkContext object
         :param kafkaParams: Additional params for Kafka
         :param offsetRanges: list of offsetRange to specify topic:partition:[start, end) to consume
         :param leaders: Kafka brokers for each TopicAndPartition in offsetRanges. May be an empty
-            map, in which case leaders will be looked up on the driver.
+        map, in which case leaders will be looked up on the driver.
         :param keyDecoder: A function used to decode key (default is utf8_decoder)
         :param valueDecoder: A function used to decode value (default is utf8_decoder)
         :return: A RDD object
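For context, a minimal usage sketch of the createRDD API documented above. The broker address, topic name, and offsets are placeholder assumptions; OffsetRange comes from the same pyspark.streaming.kafka module:

    from pyspark import SparkContext
    from pyspark.streaming.kafka import KafkaUtils, OffsetRange

    sc = SparkContext(appName="KafkaRDDSketch")

    # Consume offsets [0, 100) of partition 0 of topic "test" (placeholders).
    offsetRanges = [OffsetRange("test", 0, 0, 100)]
    kafkaParams = {"metadata.broker.list": "localhost:9092"}

    # leaders defaults to {}, so brokers are looked up on the driver;
    # keys and values are decoded with utf8_decoder by default.
    rdd = KafkaUtils.createRDD(sc, kafkaParams, offsetRanges)
    print(rdd.collect())  # list of (key, value) pairs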
