[SPARK-3909][PySpark][Doc] A corrupted format in Sphinx documents and building warnings
cocoatomo committed Oct 11, 2014
1 parent 0e8203f commit 2c7faa8
Showing 4 changed files with 9 additions and 7 deletions.
2 changes: 1 addition & 1 deletion python/docs/conf.py
@@ -131,7 +131,7 @@
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+#html_static_path = ['_static']
 
 # Add any extra paths that contain custom files (such as robots.txt or
 # .htaccess) here, relative to this directory. These files are copied
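A note on this conf.py hunk: pointing html_static_path at a directory that is not present makes Sphinx emit a build warning during HTML generation, so commenting the setting out is enough to silence it. A minimal alternative sketch, assuming the goal is only to avoid the warning while still picking up custom static files when they exist (this is not part of the commit):

    # Hypothetical alternative, not from this patch: register _static only
    # when the directory is actually there, so Sphinx has nothing to warn about.
    import os

    html_static_path = ['_static'] if os.path.isdir('_static') else []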
2 changes: 2 additions & 0 deletions python/pyspark/mllib/feature.py
@@ -44,6 +44,7 @@ def transform(self, word):
 """
 :param word: a word
+:return: vector representation of word
 
 Transforms a word to its vector representation
 
 Note: local use only
@@ -57,6 +58,7 @@ def findSynonyms(self, x, num):
 :param x: a word or a vector representation of word
 :param num: number of synonyms to find
+:return: array of (word, cosineSimilarity)
 
 Find synonyms of a word
 
 Note: local use only
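Both feature.py hunks complete a Sphinx info field list: each parameter is documented with a :param name: line and the return value with a :return: line, kept together as one block so autodoc renders them as a field table rather than running the text into the prose that follows. A minimal sketch of the pattern outside Spark (the function below is illustrative only, not from the patch):

    def cosine_similarity(a, b):
        """
        Compute the cosine similarity of two equal-length, non-zero vectors.

        :param a: the first vector, as a sequence of floats
        :param b: the second vector, as a sequence of floats
        :return: a float in the range [-1.0, 1.0]
        """
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sum(x * x for x in a) ** 0.5
        norm_b = sum(y * y for y in b) ** 0.5
        return dot / (norm_a * norm_b)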
2 changes: 1 addition & 1 deletion python/pyspark/rdd.py
@@ -2009,7 +2009,7 @@ def countApproxDistinct(self, relativeSD=0.05):
 of The Art Cardinality Estimation Algorithm", available
 <a href="http://dx.doi.org/10.1145/2452376.2452456">here</a>.
-:param relativeSD Relative accuracy. Smaller values create
+:param relativeSD: Relative accuracy. Smaller values create
 counters that require more space.
 It must be greater than 0.000017.
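The rdd.py change only restores the colon in :param relativeSD: so Sphinx recognizes the field. For context, relativeSD is the target relative accuracy of the approximate distinct count; smaller values give tighter estimates but use more memory. A small usage sketch against a local SparkContext (the data and application name are illustrative):

    from pyspark import SparkContext

    sc = SparkContext("local", "countApproxDistinct-example")

    # 1,000 distinct integers, each repeated 10 times.
    rdd = sc.parallelize(list(range(1000)) * 10)

    # With relativeSD=0.05 the estimate should land within roughly 5% of 1000.
    print(rdd.countApproxDistinct(relativeSD=0.05))

    sc.stop()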
10 changes: 5 additions & 5 deletions python/pyspark/sql.py
@@ -19,14 +19,14 @@
 public classes of Spark SQL:
 
 - L{SQLContext}
-Main entry point for SQL functionality.
+Main entry point for SQL functionality.
 - L{SchemaRDD}
-A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
-addition to normal RDD operations, SchemaRDDs also support SQL.
+A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
+addition to normal RDD operations, SchemaRDDs also support SQL.
 - L{Row}
-A Row of data returned by a Spark SQL query.
+A Row of data returned by a Spark SQL query.
 - L{HiveContext}
-Main entry point for accessing data stored in Apache Hive..
+Main entry point for accessing data stored in Apache Hive..
 """
 
 import itertools
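The sql.py hunk reads as a whitespace-only fix: each removed/added pair shows the same text here, so what changes is how far each L{...} item's description is indented under its bullet, which the documentation tooling needs in order to attach the description to the right item. A hedged sketch of the intended shape (the module and class names are illustrative, and the exact indentation used in the real patch is not recoverable from this page):

    """
    public classes of this illustrative module:

        - L{Widget}
            A small example container class.
        - L{WidgetFactory}
            Builds L{Widget} instances on demand.
    """


    class Widget(object):
        """A small example container class."""

        def __init__(self, name):
            self.name = name


    class WidgetFactory(object):
        """Builds L{Widget} instances on demand."""

        def make(self, name):
            return Widget(name)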
