Fixed whatsnew entry
hwalinga committed Feb 15, 2019
2 parents 9e4e049 + 83fe6ca commit 2f0b462
Showing 245 changed files with 7,253 additions and 9,024 deletions.
10 changes: 5 additions & 5 deletions .github/CONTRIBUTING.md
@@ -8,16 +8,16 @@ Our main contributing guide can be found [in this repo](https://github.com/panda

If you are looking to contribute to the *pandas* codebase, the best place to start is the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues). This is also a great place for filing bug reports and making suggestions for ways in which we can improve the code and documentation.

-If you have additional questions, feel free to ask them on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). Further information can also be found in the "[Where to start?](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#where-to-start)" section.
+If you have additional questions, feel free to ask them on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). Further information can also be found in the "[Where to start?](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#where-to-start)" section.

## Filing Issues

-If you notice a bug in the code or documentation, or have suggestions for how we can improve either, feel free to create an issue on the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) using [GitHub's "issue" form](https://github.com/pandas-dev/pandas/issues/new). The form contains some questions that will help us best address your issue. For more information regarding how to file issues against *pandas*, please refer to the "[Bug reports and enhancement requests](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#bug-reports-and-enhancement-requests)" section.
+If you notice a bug in the code or documentation, or have suggestions for how we can improve either, feel free to create an issue on the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) using [GitHub's "issue" form](https://github.com/pandas-dev/pandas/issues/new). The form contains some questions that will help us best address your issue. For more information regarding how to file issues against *pandas*, please refer to the "[Bug reports and enhancement requests](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#bug-reports-and-enhancement-requests)" section.

## Contributing to the Codebase

-The code is hosted on [GitHub](https://www.github.com/pandas-dev/pandas), so you will need to use [Git](http://git-scm.com/) to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. For more information, please refer to the "[Working with the code](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#working-with-the-code)" section.
+The code is hosted on [GitHub](https://www.github.com/pandas-dev/pandas), so you will need to use [Git](http://git-scm.com/) to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. For more information, please refer to the "[Working with the code](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#working-with-the-code)" section.

-Before submitting your changes for review, make sure to check that your changes do not break any tests. You can find more information about our test suites in the "[Test-driven development/code writing](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#test-driven-development-code-writing)" section. We also have guidelines regarding coding style that will be enforced during testing, which can be found in the "[Code standards](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#code-standards)" section.
+Before submitting your changes for review, make sure to check that your changes do not break any tests. You can find more information about our test suites in the "[Test-driven development/code writing](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#test-driven-development-code-writing)" section. We also have guidelines regarding coding style that will be enforced during testing, which can be found in the "[Code standards](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#code-standards)" section.

-Once your changes are ready to be submitted, make sure to push your changes to GitHub before creating a pull request. Details about how to do that can be found in the "[Contributing your changes to pandas](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#contributing-your-changes-to-pandas)" section. We will review your changes, and you will most likely be asked to make additional changes before it is finally ready to merge. However, once it's ready, we will merge it, and you will have successfully contributed to the codebase!
+Once your changes are ready to be submitted, make sure to push your changes to GitHub before creating a pull request. Details about how to do that can be found in the "[Contributing your changes to pandas](https://github.com/pandas-dev/pandas/blob/master/doc/source/development/contributing.rst#contributing-your-changes-to-pandas)" section. We will review your changes, and you will most likely be asked to make additional changes before it is finally ready to merge. However, once it's ready, we will merge it, and you will have successfully contributed to the codebase!
1 change: 0 additions & 1 deletion Makefile
@@ -23,4 +23,3 @@ doc:
cd doc; \
python make.py clean; \
python make.py html
-python make.py spellcheck
1 change: 1 addition & 0 deletions asv_bench/benchmarks/__init__.py
@@ -0,0 +1 @@
"""Pandas benchmarks."""
3 changes: 1 addition & 2 deletions asv_bench/benchmarks/algorithms.py
@@ -5,7 +5,6 @@
import pandas as pd
from pandas.util import testing as tm


for imp in ['pandas.util', 'pandas.tools.hashing']:
    try:
        hashing = import_module(imp)
@@ -142,4 +141,4 @@ def time_quantile(self, quantile, interpolation, dtype):
self.idx.quantile(quantile, interpolation=interpolation)


-from .pandas_vb_common import setup # noqa: F401
+from .pandas_vb_common import setup # noqa: F401 isort:skip
29 changes: 1 addition & 28 deletions asv_bench/benchmarks/io/hdf.py
@@ -1,7 +1,5 @@
-import warnings
-
import numpy as np
-from pandas import DataFrame, Panel, date_range, HDFStore, read_hdf
+from pandas import DataFrame, date_range, HDFStore, read_hdf
import pandas.util.testing as tm

from ..pandas_vb_common import BaseIO
@@ -99,31 +97,6 @@ def time_store_info(self):
self.store.info()


-class HDFStorePanel(BaseIO):
-
-    def setup(self):
-        self.fname = '__test__.h5'
-        with warnings.catch_warnings(record=True):
-            self.p = Panel(np.random.randn(20, 1000, 25),
-                           items=['Item%03d' % i for i in range(20)],
-                           major_axis=date_range('1/1/2000', periods=1000),
-                           minor_axis=['E%03d' % i for i in range(25)])
-            self.store = HDFStore(self.fname)
-            self.store.append('p1', self.p)
-
-    def teardown(self):
-        self.store.close()
-        self.remove(self.fname)
-
-    def time_read_store_table_panel(self):
-        with warnings.catch_warnings(record=True):
-            self.store.select('p1')
-
-    def time_write_store_table_panel(self):
-        with warnings.catch_warnings(record=True):
-            self.store.append('p2', self.p)


class HDF(BaseIO):

params = ['table', 'fixed']
4 changes: 2 additions & 2 deletions asv_bench/benchmarks/strings.py
@@ -102,10 +102,10 @@ def setup(self, repeats):
        N = 10**5
        self.s = Series(tm.makeStringIndex(N))
        repeat = {'int': 1, 'array': np.random.randint(1, 3, N)}
-        self.repeat = repeat[repeats]
+        self.values = repeat[repeats]

    def time_repeat(self, repeats):
-        self.s.str.repeat(self.repeat)
+        self.s.str.repeat(self.values)


class Cat(object):
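For context on what this benchmark exercises: ``Series.str.repeat`` accepts either a single integer or a per-element sequence of counts, which is what the ``'int'`` and ``'array'`` parameter variants above correspond to. A minimal illustrative sketch with toy data (not the benchmark's own data):

    import numpy as np
    import pandas as pd

    s = pd.Series(['a', 'b', 'c'])

    # A single integer repeats every element the same number of times.
    print(s.str.repeat(2))                    # -> 'aa', 'bb', 'cc'

    # An array-like gives a per-element count (the 'array' variant above).
    print(s.str.repeat(np.array([1, 2, 3])))  # -> 'a', 'bb', 'ccc'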
13 changes: 7 additions & 6 deletions ci/code_checks.sh
@@ -93,7 +93,7 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
# this particular codebase (e.g. src/headers, src/klib, src/msgpack). However,
# we can lint all header files since they aren't "generated" like C files are.
MSG='Linting .c and .h' ; echo $MSG
-cpplint --quiet --extensions=c,h --headers=h --recursive --filter=-readability/casting,-runtime/int,-build/include_subdir pandas/_libs/src/*.h pandas/_libs/src/parser pandas/_libs/ujson pandas/_libs/tslibs/src/datetime
+cpplint --quiet --extensions=c,h --headers=h --recursive --filter=-readability/casting,-runtime/int,-build/include_subdir pandas/_libs/src/*.h pandas/_libs/src/parser pandas/_libs/ujson pandas/_libs/tslibs/src/datetime pandas/io/msgpack pandas/_libs/*.cpp pandas/util
RET=$(($RET + $?)) ; echo $MSG "DONE"

echo "isort --version-number"
@@ -174,9 +174,10 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
MSG='Check that no file in the repo contains tailing whitespaces' ; echo $MSG
set -o pipefail
if [[ "$AZURE" == "true" ]]; then
-! grep -n --exclude="*.svg" -RI "\s$" * | awk -F ":" '{print "##vso[task.logissue type=error;sourcepath=" $1 ";linenumber=" $2 ";] Tailing whitespaces found: " $3}'
+# we exclude all c/cpp files as the c/cpp files of pandas code base are tested when Linting .c and .h files
+! grep -n '--exclude=*.'{svg,c,cpp,html} -RI "\s$" * | awk -F ":" '{print "##vso[task.logissue type=error;sourcepath=" $1 ";linenumber=" $2 ";] Tailing whitespaces found: " $3}'
else
-! grep -n --exclude="*.svg" -RI "\s$" * | awk -F ":" '{print $1 ":" $2 ":Tailing whitespaces found: " $3}'
+! grep -n '--exclude=*.'{svg,c,cpp,html} -RI "\s$" * | awk -F ":" '{print $1 ":" $2 ":Tailing whitespaces found: " $3}'
fi
RET=$(($RET + $?)) ; echo $MSG "DONE"
fi
@@ -206,7 +207,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then

MSG='Doctests frame.py' ; echo $MSG
pytest -q --doctest-modules pandas/core/frame.py \
-k"-axes -combine -itertuples -join -pivot_table -query -reindex -reindex_axis -round"
-k" -itertuples -join -reindex -reindex_axis -round"
RET=$(($RET + $?)) ; echo $MSG "DONE"

MSG='Doctests series.py' ; echo $MSG
@@ -240,8 +241,8 @@ fi
### DOCSTRINGS ###
if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then

-MSG='Validate docstrings (GL06, GL07, GL09, SS04, PR03, PR05, EX04)' ; echo $MSG
-$BASE_DIR/scripts/validate_docstrings.py --format=azure --errors=GL06,GL07,GL09,SS04,PR03,PR05,EX04
+MSG='Validate docstrings (GL06, GL07, GL09, SS04, PR03, PR05, PR10, EX04, RT04, RT05, SS05, SA05)' ; echo $MSG
+$BASE_DIR/scripts/validate_docstrings.py --format=azure --errors=GL06,GL07,GL09,SS04,PR03,PR04,PR05,EX04,RT04,RT05,SS05,SA05
RET=$(($RET + $?)) ; echo $MSG "DONE"

fi
Binary file modified doc/cheatsheet/Pandas_Cheat_Sheet.pdf
Binary file modified doc/cheatsheet/Pandas_Cheat_Sheet.pptx
Binary file modified doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf
Binary file modified doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx
4 changes: 2 additions & 2 deletions doc/source/user_guide/groupby.rst
@@ -15,7 +15,7 @@ steps:

Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
-those groups. In the apply step, we might wish to one of the
+those groups. In the apply step, we might wish to do one of the
following:

* **Aggregation**: compute a summary statistic (or statistics) for each
@@ -1317,7 +1317,7 @@ arbitrary function, for example:
df.groupby(['Store', 'Product']).pipe(mean)
where ``mean`` takes a GroupBy object and finds the mean of the Revenue and Quantity
-columns repectively for each Store-Product combination. The ``mean`` function can
+columns respectively for each Store-Product combination. The ``mean`` function can
be any function that takes in a GroupBy object; the ``.pipe`` will pass the GroupBy
object as a parameter into the function you specify.
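
To make the paragraph above concrete, here is a minimal sketch of such a ``mean`` function; the sample data is invented for illustration, and only the ``Store``, ``Product``, ``Revenue`` and ``Quantity`` names come from the surrounding example:

    import pandas as pd

    # Hypothetical sample data mirroring the Store/Product example.
    df = pd.DataFrame({'Store': ['A', 'A', 'B', 'B'],
                       'Product': ['x', 'y', 'x', 'y'],
                       'Revenue': [10.0, 20.0, 30.0, 40.0],
                       'Quantity': [1, 2, 3, 4]})

    def mean(groupby):
        # Receives the GroupBy object and returns the per-group mean of
        # the Revenue and Quantity columns.
        return groupby[['Revenue', 'Quantity']].mean()

    # .pipe hands the GroupBy object to ``mean`` as its first argument.
    print(df.groupby(['Store', 'Product']).pipe(mean))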

30 changes: 30 additions & 0 deletions doc/source/user_guide/io.rst
@@ -989,6 +989,36 @@ a single date rather than the entire array.
os.remove('tmp.csv')

.. _io.csv.mixed_timezones:

Parsing a CSV with mixed Timezones
++++++++++++++++++++++++++++++++++

Pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with ``parse_dates``.


.. ipython:: python

   content = """\
   a
   2000-01-01T00:00:00+05:00
   2000-01-01T00:00:00+06:00"""

   df = pd.read_csv(StringIO(content), parse_dates=['a'])
   df['a']

To parse the mixed-timezone values as a datetime column, pass a partially-applied
:func:`to_datetime` with ``utc=True`` as the ``date_parser``.

.. ipython:: python

   df = pd.read_csv(StringIO(content), parse_dates=['a'],
                    date_parser=lambda col: pd.to_datetime(col, utc=True))
   df['a']

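For illustration, the "partially-applied :func:`to_datetime`" above can also be spelled with ``functools.partial`` instead of a lambda. A minimal, self-contained sketch, assuming the same two sample timestamps and the ``date_parser`` keyword used in the block above:

    from functools import partial
    from io import StringIO

    import pandas as pd

    content = """\
    a
    2000-01-01T00:00:00+05:00
    2000-01-01T00:00:00+06:00"""

    # Pre-binding utc=True is the "partial application": every value is
    # converted to UTC, yielding one tz-aware datetime column.
    parse_utc = partial(pd.to_datetime, utc=True)

    df = pd.read_csv(StringIO(content), parse_dates=['a'], date_parser=parse_utc)
    print(df['a'].dtype)  # datetime64[ns, UTC]
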
.. _io.dayfirst:

