Remove the sentences about the project being beta #1589

Merged 1 commit on Jun 19, 2020
2 changes: 1 addition & 1 deletion README.md
@@ -23,7 +23,7 @@ pandas is the de facto standard (single-node) DataFrame implementation in Python
- Be immediately productive with Spark, with no learning curve, if you are already familiar with pandas.
- Have a single codebase that works both with pandas (tests, smaller datasets) and with Spark (distributed datasets).

- This project is currently in beta and is rapidly evolving, with a bi-weekly release cadence. We would love to have you try it and give us feedback, through our [mailing lists](https://groups.google.com/forum/#!forum/koalas-dev) or [GitHub issues](https://github.com/databricks/koalas/issues).
+ We would love to have you try it and give us feedback, through our [mailing lists](https://groups.google.com/forum/#!forum/koalas-dev) or [GitHub issues](https://github.com/databricks/koalas/issues).

Try the Koalas 10 minutes tutorial on a live Jupyter notebook [here](https://mybinder.org/v2/gh/databricks/koalas/master?filepath=docs%2Fsource%2Fgetting_started%2F10min.ipynb). The initial launch can take up to several minutes.

2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -10,7 +10,7 @@ With this package, you can:
* Be immediately productive with Spark, with no learning curve, if you are already familiar with pandas.
* Have a single codebase that works both with pandas (tests, smaller datasets) and with Spark (distributed datasets).

- This project is currently in beta and is rapidly evolving, with a bi-weekly release cadence. We would love to have you try it and give us feedback,
+ We would love to have you try it and give us feedback,
through our `mailing lists <https://groups.google.com/forum/#!forum/koalas-dev>`_ or `GitHub issues <https://github.com/databricks/koalas/issues>`_.
Try the Koalas 10 minutes tutorial on a live Jupyter notebook `here <https://mybinder.org/v2/gh/databricks/koalas/master?filepath=docs%2Fsource%2Fgetting_started%2F10min.ipynb>`_.
The initial launch can take up to several minutes.
13 changes: 7 additions & 6 deletions docs/source/user_guide/faq.rst
@@ -5,18 +5,19 @@ FAQ
What's the project's status?
----------------------------

- This project is currently in beta and is rapidly evolving.
- We plan to do bi-weekly releases at this stage.
- You should expect the following differences:
+ Koalas 1.0.0 has been released, and the project is much more stable now.
+ You might still face the following differences:

- some functions may be missing. Please create a GitHub issue if your favorite function is not yet supported. We also document all the functions that are not yet supported in the `missing directory <https://github.com/databricks/koalas/tree/master/databricks/koalas/missing>`_.
+ Most pandas-equivalent APIs are implemented, but some may still be missing.
+ Please create a GitHub issue if your favorite function is not yet supported.
+ We also document all APIs that are not yet supported in the `missing directory <https://github.com/databricks/koalas/tree/master/databricks/koalas/missing>`_.

- some behavior may be different, in particular in the treatment of nulls: Pandas uses
+ Some behaviors may be different, in particular in the treatment of nulls: Pandas uses
Not a Number (NaN) special constants to indicate missing values, while Spark has a
special flag on each value to indicate missing values. We would love to hear from you
if you come across any discrepancies.

- because Spark is lazy in nature, some operations like creating new columns only get
+ Because Spark is lazy in nature, some operations like creating new columns only get
performed when Spark needs to print or write the dataframe.
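As an aside, the null-handling difference described above can be seen on the pandas side with a small sketch (assuming only that pandas is installed; the Koalas/Spark side is omitted here because it requires a running Spark session):

```python
import pandas as pd

# pandas has no per-value null flag: a missing entry in a numeric column
# is stored as the float NaN, so the whole column is upcast to float64.
s = pd.Series([1, 2, None])
print(s.dtype)            # float64 -- the ints were upcast to hold NaN
print(s.isna().tolist())  # [False, False, True]
```

Spark (and therefore Koalas) instead tracks nullability per value, so an integer column with missing entries can keep its integer type, which is one source of the discrepancies mentioned above.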

Is it Koalas or koalas?