DEPR: deprecate strings T, S, L, U, and N in offsets frequencies, resolution abbreviations, _attrname_to_abbrevs #54061
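In user code, the rename this PR carries out amounts to swapping the deprecated single-letter codes for their longer spellings ("T" → "min", "L" → "ms", "U" → "us", "N" → "ns"). A minimal sketch using only the new-style aliases, which are accepted both before and after the deprecation (the dates and periods here are illustrative):

```python
import pandas as pd

# New-style lowercase aliases replacing the deprecated codes:
#   "T" -> "min", "L" -> "ms", "U" -> "us", "N" -> "ns"
idx = pd.date_range("2013-01-01", periods=3, freq="min")              # was freq="T"
tdi = pd.timedelta_range(start="1 days", end="2 days", freq="30min")  # was "30T"
ms_idx = pd.date_range("2013-01-01", periods=3, freq="5ms")           # was freq="5L"

print(idx[1] - idx[0])     # 0 days 00:01:00
print(ms_idx[1] - ms_idx[0])  # 0 days 00:00:00.005000
```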

Merged
Commits (47 total; changes shown from 31 commits)
6e3e96e
DEPR: deprecate codes T and L in to_abbrevs/_abbrev_to_attrnames
natmokval Jul 10, 2023
fe88663
replace T/L with min/ms in _prefix, period_code_map, _attrname_to_abb…
natmokval Jul 12, 2023
1e79480
correct def get_freq for tseries, fix tests
natmokval Jul 14, 2023
f7fd2a1
replace T, L in _offset_to_period_map, is_subperiod, is_superperiod, …
natmokval Jul 14, 2023
98e8a39
correct def to_timedelta, def _round_temporally and fix tests
natmokval Jul 17, 2023
93bbd08
correct def resolution_string, get_reso_from_freqstr and fix tests
natmokval Jul 21, 2023
b0dfd2f
Merge branch 'main' into DEPR-codes-T-L-from-_attrname_to_abbrevs-_ab…
natmokval Jul 21, 2023
51c62a1
fix tests
natmokval Jul 21, 2023
898f811
correct def _maybe_coerce_freq , is_subperiod, is_superperiod, and _o…
natmokval Jul 23, 2023
1d30c07
fix a test for plotting
natmokval Jul 25, 2023
37de346
Merge branch 'main' into DEPR-codes-T-L-from-_attrname_to_abbrevs-_ab…
natmokval Jul 25, 2023
7171237
fix tests
natmokval Jul 25, 2023
29dcd8d
fix failures in asv benchmarks
natmokval Jul 25, 2023
317718a
correct docstrings
natmokval Jul 27, 2023
99a0cf9
deprecate abbrevs U, N add dict depr_abbrevs and fix tests
natmokval Jul 28, 2023
a13a041
correct get_freq and fix tests
natmokval Jul 31, 2023
a949282
Merge branch 'main' into DEPR-codes-T-L-from-_attrname_to_abbrevs-_ab…
natmokval Jul 31, 2023
7bd6188
correct is_superperiod, is_subperiod, _maybe_coerce_freq and fix tests
natmokval Jul 31, 2023
77949c4
correct __eq__ for PeriodDtype
natmokval Aug 1, 2023
a24c0ec
update docstrings
natmokval Aug 1, 2023
733d68b
correct whatsnew and user_guide
natmokval Aug 1, 2023
beeac14
correct tables of Offset/Period aliases in user_guide
natmokval Aug 1, 2023
a3e3522
correct warning message, add the warning to some tests
natmokval Aug 1, 2023
65ddf90
resolve conflicts in tests
natmokval Aug 1, 2023
c61b0fb
add the futurewarning to def asfreq, fix tests
natmokval Aug 1, 2023
c2f45ba
add the futurewarning to to_offset, correct warning message and add t…
natmokval Aug 2, 2023
73405bf
add the warning to parse_timedelta_unit, remove t, l, u, n from timed…
natmokval Aug 2, 2023
b2ab238
correct docstrings, update user_guide for timeseries and add tests
natmokval Aug 2, 2023
4775471
update whatsnew/v2.1.0.rst
natmokval Aug 2, 2023
c3ed691
remove warning from to_timedelta, correct tests
natmokval Aug 3, 2023
155b0a7
Merge branch 'main' into DEPR-codes-T-L-from-_attrname_to_abbrevs-_ab…
natmokval Aug 7, 2023
31d292c
deprecate 'S' in favour of 's', fix tests
natmokval Aug 9, 2023
d5dabd0
fix tests
natmokval Aug 9, 2023
9cf0565
correct parse_iso_format_string, fix tests
natmokval Aug 13, 2023
609646e
correct docs
natmokval Aug 17, 2023
cc04261
correct docs
natmokval Aug 17, 2023
93533d9
correct docstrings in PeriodProperties
natmokval Aug 17, 2023
9ba1734
Merge branch 'main' into DEPR-codes-T-L-from-_attrname_to_abbrevs-_ab…
natmokval Aug 17, 2023
b79e9b6
correct docs, tests, and add lines to whatsnew/v2.2.0.rst
natmokval Aug 17, 2023
3408d0b
resolve conflict
natmokval Aug 17, 2023
12888f8
correct examples in docs
natmokval Aug 17, 2023
cdd5f6b
resolve conflict
natmokval Aug 18, 2023
c7b8b24
resolve conflict
natmokval Aug 21, 2023
5bb2ca8
correct v2.2.0.rst and test_subset
natmokval Aug 22, 2023
c54e431
resolve conflict in v2.2.0.rst
natmokval Aug 22, 2023
0966d2f
resolve conflict
natmokval Aug 22, 2023
271bd6b
resolve conflict v2.2.0.rst
natmokval Aug 22, 2023
6 changes: 3 additions & 3 deletions asv_bench/benchmarks/arithmetic.py
@@ -262,7 +262,7 @@ class Timeseries:
def setup(self, tz):
N = 10**6
halfway = (N // 2) - 1
- self.s = Series(date_range("20010101", periods=N, freq="T", tz=tz))
+ self.s = Series(date_range("20010101", periods=N, freq="min", tz=tz))
self.ts = self.s[halfway]

self.s2 = Series(date_range("20010101", periods=N, freq="s", tz=tz))
@@ -460,7 +460,7 @@ class OffsetArrayArithmetic:

def setup(self, offset):
N = 10000
- rng = date_range(start="1/1/2000", periods=N, freq="T")
+ rng = date_range(start="1/1/2000", periods=N, freq="min")
self.rng = rng
self.ser = Series(rng)

@@ -479,7 +479,7 @@ class ApplyIndex:

def setup(self, offset):
N = 10000
- rng = date_range(start="1/1/2000", periods=N, freq="T")
+ rng = date_range(start="1/1/2000", periods=N, freq="min")
self.rng = rng

def time_apply_index(self, offset):
2 changes: 1 addition & 1 deletion asv_bench/benchmarks/eval.py
@@ -44,7 +44,7 @@ class Query:
def setup(self):
N = 10**6
halfway = (N // 2) - 1
- index = pd.date_range("20010101", periods=N, freq="T")
+ index = pd.date_range("20010101", periods=N, freq="min")
s = pd.Series(index)
self.ts = s.iloc[halfway]
self.df = pd.DataFrame({"a": np.random.randn(N), "dates": index}, index=index)
2 changes: 1 addition & 1 deletion asv_bench/benchmarks/gil.py
@@ -178,7 +178,7 @@ def time_kth_smallest(self):
class ParallelDatetimeFields:
def setup(self):
N = 10**6
- self.dti = date_range("1900-01-01", periods=N, freq="T")
+ self.dti = date_range("1900-01-01", periods=N, freq="min")
self.period = self.dti.to_period("D")

def time_datetime_field_year(self):
6 changes: 3 additions & 3 deletions asv_bench/benchmarks/index_cached_properties.py
@@ -25,14 +25,14 @@ def setup(self, index_type):
N = 10**5
if index_type == "MultiIndex":
self.idx = pd.MultiIndex.from_product(
- [pd.date_range("1/1/2000", freq="T", periods=N // 2), ["a", "b"]]
+ [pd.date_range("1/1/2000", freq="min", periods=N // 2), ["a", "b"]]
)
elif index_type == "DatetimeIndex":
- self.idx = pd.date_range("1/1/2000", freq="T", periods=N)
+ self.idx = pd.date_range("1/1/2000", freq="min", periods=N)
elif index_type == "Int64Index":
self.idx = pd.Index(range(N), dtype="int64")
elif index_type == "PeriodIndex":
- self.idx = pd.period_range("1/1/2000", freq="T", periods=N)
+ self.idx = pd.period_range("1/1/2000", freq="min", periods=N)
elif index_type == "RangeIndex":
self.idx = pd.RangeIndex(start=0, stop=N)
elif index_type == "IntervalIndex":
2 changes: 1 addition & 1 deletion asv_bench/benchmarks/index_object.py
@@ -25,7 +25,7 @@ class SetOperations:

def setup(self, index_structure, dtype, method):
N = 10**5
- dates_left = date_range("1/1/2000", periods=N, freq="T")
+ dates_left = date_range("1/1/2000", periods=N, freq="min")
fmt = "%Y-%m-%d %H:%M:%S"
date_str_left = Index(dates_left.strftime(fmt))
int_left = Index(np.arange(N))
2 changes: 1 addition & 1 deletion asv_bench/benchmarks/io/json.py
@@ -290,7 +290,7 @@ def time_float_longint_str_lines(self):
class ToJSONMem:
def setup_cache(self):
df = DataFrame([[1]])
- df2 = DataFrame(range(8), date_range("1/1/2000", periods=8, freq="T"))
+ df2 = DataFrame(range(8), date_range("1/1/2000", periods=8, freq="min"))
frames = {"int": df, "float": df.astype(float), "datetime": df2}

return frames
4 changes: 2 additions & 2 deletions asv_bench/benchmarks/join_merge.py
@@ -212,7 +212,7 @@ class JoinNonUnique:
# outer join of non-unique
# GH 6329
def setup(self):
- date_index = date_range("01-Jan-2013", "23-Jan-2013", freq="T")
+ date_index = date_range("01-Jan-2013", "23-Jan-2013", freq="min")
daily_dates = date_index.to_period("D").to_timestamp("S", "S")
self.fracofday = date_index.values - daily_dates.values
self.fracofday = self.fracofday.astype("timedelta64[ns]")
@@ -338,7 +338,7 @@ class MergeDatetime:
def setup(self, units, tz):
unit_left, unit_right = units
N = 10_000
- keys = Series(date_range("2012-01-01", freq="T", periods=N, tz=tz))
+ keys = Series(date_range("2012-01-01", freq="min", periods=N, tz=tz))
self.left = DataFrame(
{
"key": keys.sample(N * 10, replace=True).dt.as_unit(unit_left),
2 changes: 1 addition & 1 deletion asv_bench/benchmarks/sparse.py
@@ -22,7 +22,7 @@ class SparseSeriesToFrame:
def setup(self):
K = 50
N = 50001
- rng = date_range("1/1/2000", periods=N, freq="T")
+ rng = date_range("1/1/2000", periods=N, freq="min")
self.series = {}
for i in range(1, K):
data = np.random.randn(N)[:-i]
16 changes: 8 additions & 8 deletions asv_bench/benchmarks/timeseries.py
@@ -116,7 +116,7 @@ def time_infer_freq(self, freq):
class TimeDatetimeConverter:
def setup(self):
N = 100000
- self.rng = date_range(start="1/1/2000", periods=N, freq="T")
+ self.rng = date_range(start="1/1/2000", periods=N, freq="min")

def time_convert(self):
DatetimeConverter.convert(self.rng, None, None)
@@ -129,9 +129,9 @@ class Iteration:
def setup(self, time_index):
N = 10**6
if time_index is timedelta_range:
- self.idx = time_index(start=0, freq="T", periods=N)
+ self.idx = time_index(start=0, freq="min", periods=N)
else:
- self.idx = time_index(start="20140101", freq="T", periods=N)
+ self.idx = time_index(start="20140101", freq="min", periods=N)
self.exit = 10000

def time_iter(self, time_index):
@@ -149,7 +149,7 @@ class ResampleDataFrame:
param_names = ["method"]

def setup(self, method):
- rng = date_range(start="20130101", periods=100000, freq="50L")
+ rng = date_range(start="20130101", periods=100000, freq="50ms")
df = DataFrame(np.random.randn(100000, 2), index=rng)
self.resample = getattr(df.resample("1s"), method)

@@ -163,8 +163,8 @@ class ResampleSeries:

def setup(self, index, freq, method):
indexes = {
- "period": period_range(start="1/1/2000", end="1/1/2001", freq="T"),
- "datetime": date_range(start="1/1/2000", end="1/1/2001", freq="T"),
+ "period": period_range(start="1/1/2000", end="1/1/2001", freq="min"),
+ "datetime": date_range(start="1/1/2000", end="1/1/2001", freq="min"),
}
idx = indexes[index]
ts = Series(np.random.randn(len(idx)), index=idx)
@@ -178,7 +178,7 @@ class ResampleDatetetime64:
# GH 7754
def setup(self):
rng3 = date_range(
- start="2000-01-01 00:00:00", end="2000-01-01 10:00:00", freq="555000U"
+ start="2000-01-01 00:00:00", end="2000-01-01 10:00:00", freq="555000us"
)
self.dt_ts = Series(5, rng3, dtype="datetime64[ns]")

@@ -270,7 +270,7 @@ class DatetimeAccessor:

def setup(self, tz):
N = 100000
- self.series = Series(date_range(start="1/1/2000", periods=N, freq="T", tz=tz))
+ self.series = Series(date_range(start="1/1/2000", periods=N, freq="min", tz=tz))

def time_dt_accessor(self, tz):
self.series.dt
4 changes: 2 additions & 2 deletions asv_bench/benchmarks/tslibs/timestamp.py
@@ -136,10 +136,10 @@ def time_to_julian_date(self, tz):
self.ts.to_julian_date()

def time_floor(self, tz):
- self.ts.floor("5T")
+ self.ts.floor("5min")

def time_ceil(self, tz):
- self.ts.ceil("5T")
+ self.ts.ceil("5min")


class TimestampAcrossDst:
4 changes: 2 additions & 2 deletions doc/source/user_guide/scale.rst
@@ -40,7 +40,7 @@ Suppose our raw dataset on disk has many columns.
return df

timeseries = [
- make_timeseries(freq="1T", seed=i).rename(columns=lambda x: f"{x}_{i}")
+ make_timeseries(freq="1min", seed=i).rename(columns=lambda x: f"{x}_{i}")
for i in range(10)
]
ts_wide = pd.concat(timeseries, axis=1)
@@ -173,7 +173,7 @@ files. Each file in the directory represents a different year of the entire data
pathlib.Path("data/timeseries").mkdir(exist_ok=True)

for i, (start, end) in enumerate(zip(starts, ends)):
- ts = make_timeseries(start=start, end=end, freq="1T", seed=i)
+ ts = make_timeseries(start=start, end=end, freq="1min", seed=i)
ts.to_parquet(f"data/timeseries/ts-{i:0>2d}.parquet")


2 changes: 1 addition & 1 deletion doc/source/user_guide/timedeltas.rst
@@ -390,7 +390,7 @@ The ``freq`` parameter can passed a variety of :ref:`frequency aliases <timeseri

.. ipython:: python

- pd.timedelta_range(start="1 days", end="2 days", freq="30T")
+ pd.timedelta_range(start="1 days", end="2 days", freq="30min")

pd.timedelta_range(start="1 days", periods=5, freq="2D5H")

52 changes: 31 additions & 21 deletions doc/source/user_guide/timeseries.rst
@@ -603,7 +603,7 @@ would include matching times on an included date:
dft = pd.DataFrame(
np.random.randn(100000, 1),
columns=["A"],
- index=pd.date_range("20130101", periods=100000, freq="T"),
+ index=pd.date_range("20130101", periods=100000, freq="min"),
)
dft
dft.loc["2013"]
@@ -905,11 +905,11 @@ into ``freq`` keyword arguments. The available date offsets and associated frequ
:class:`~pandas.tseries.offsets.CustomBusinessHour`, ``'CBH'``, "custom business hour"
:class:`~pandas.tseries.offsets.Day`, ``'D'``, "one absolute day"
:class:`~pandas.tseries.offsets.Hour`, ``'H'``, "one hour"
Review comment (Member):

This doesn't need changing in this PR (it can be done as a follow-up; there's plenty of time until pandas 2.2), but if we're doing these renamings, then 'H' should probably be renamed too (and CBH, and BH).

I'd stop there - then there's a simple rule: anything sub-daily is lowercase, anything daily or higher is upper-case.

Reply (Contributor, author):

Thank you for the comments. I agree, it would be better to rename 'H', 'CBH', and 'BH' too; I'll do that in a separate PR. I like the idea of keeping anything sub-daily lowercase, and anything daily or higher upper-case.

- :class:`~pandas.tseries.offsets.Minute`, ``'T'`` or ``'min'``,"one minute"
+ :class:`~pandas.tseries.offsets.Minute`, ``'min'``,"one minute"
:class:`~pandas.tseries.offsets.Second`, ``'S'``, "one second"
- :class:`~pandas.tseries.offsets.Milli`, ``'L'`` or ``'ms'``, "one millisecond"
- :class:`~pandas.tseries.offsets.Micro`, ``'U'`` or ``'us'``, "one microsecond"
- :class:`~pandas.tseries.offsets.Nano`, ``'N'``, "one nanosecond"
+ :class:`~pandas.tseries.offsets.Milli`, ``'ms'``, "one millisecond"
+ :class:`~pandas.tseries.offsets.Micro`, ``'us'``, "one microsecond"
+ :class:`~pandas.tseries.offsets.Nano`, ``'ns'``, "one nanosecond"
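The casing convention discussed in the review thread above - sub-daily aliases lowercase, daily-and-higher aliases uppercase - can be sketched with spellings that already work today (the dates and counts are illustrative; the 'H' rename itself is deferred to a follow-up PR):

```python
import pandas as pd

# Sub-daily aliases are (or become) lowercase ...
sub_daily = pd.date_range("2023-01-01", periods=3, freq="min")  # minutes
secondly = pd.timedelta_range(start=0, periods=3, freq="s")     # seconds

# ... while daily-and-higher aliases stay uppercase.
daily = pd.date_range("2023-01-01", periods=3, freq="D")

print(daily[-1] - daily[0])  # 2 days 00:00:00
```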

``DateOffsets`` additionally have :meth:`rollforward` and :meth:`rollback`
methods for moving a date forward or backward respectively to a valid offset
@@ -1264,11 +1264,16 @@ frequencies. We will refer to these aliases as *offset aliases*.
"BAS, BYS", "business year start frequency"
"BH", "business hour frequency"
"H", "hourly frequency"
- "T, min", "minutely frequency"
+ "min", "minutely frequency"
"S", "secondly frequency"
- "L, ms", "milliseconds"
- "U, us", "microseconds"
- "N", "nanoseconds"
+ "ms", "milliseconds"
+ "us", "microseconds"
+ "ns", "nanoseconds"
+
+ .. deprecated:: 2.1.0
+
+    Aliases ``T``, ``L``, ``U``, and ``N`` are deprecated in favour of the aliases
+    ``min``, ``ms``, ``us``, and ``ns``.
Review comment (Member):

This looks fine to me; I think it's a better experience if the list of aliases above teaches the new behaviour to begin with - but I'm OK with adding T / L etc. back there if people prefer.
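A short sketch of what the updated alias table teaches in practice (illustrative data; only the replacement aliases are used, so it runs on pandas versions before and after the deprecation):

```python
import numpy as np
import pandas as pd

# One second of data at 10 ms spacing, resampled into 250 ms buckets
# ("250ms" is the spelling that replaces the deprecated "250L").
rng = pd.date_range("2012-01-01", periods=100, freq="10ms")
ts = pd.Series(np.arange(100), index=rng)

out = ts.resample("250ms").sum()
print(len(out))  # 4 buckets of 25 samples each
```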


.. note::

@@ -1318,11 +1323,16 @@ frequencies. We will refer to these aliases as *period aliases*.
"Q", "quarterly frequency"
"A, Y", "yearly frequency"
"H", "hourly frequency"
- "T, min", "minutely frequency"
+ "min", "minutely frequency"
"S", "secondly frequency"
- "L, ms", "milliseconds"
- "U, us", "microseconds"
- "N", "nanoseconds"
+ "ms", "milliseconds"
+ "us", "microseconds"
+ "ns", "nanoseconds"
+
+ .. deprecated:: 2.1.0
+
+    Aliases ``T``, ``L``, ``U``, and ``N`` are deprecated in favour of the aliases
+    ``min``, ``ms``, ``us``, and ``ns``.


Combining aliases
@@ -1343,7 +1353,7 @@ You can combine together day and intraday offsets:

pd.date_range(start, periods=10, freq="2h20min")

- pd.date_range(start, periods=10, freq="1D10U")
+ pd.date_range(start, periods=10, freq="1D10us")

Anchored offsets
~~~~~~~~~~~~~~~~
@@ -1725,11 +1735,11 @@ For upsampling, you can specify a way to upsample and the ``limit`` parameter to

# from secondly to every 250 milliseconds

- ts[:2].resample("250L").asfreq()
+ ts[:2].resample("250ms").asfreq()

- ts[:2].resample("250L").ffill()
+ ts[:2].resample("250ms").ffill()

- ts[:2].resample("250L").ffill(limit=2)
+ ts[:2].resample("250ms").ffill(limit=2)

Sparse resampling
~~~~~~~~~~~~~~~~~
@@ -1752,7 +1762,7 @@ If we want to resample to the full range of the series:

.. ipython:: python

- ts.resample("3T").sum()
+ ts.resample("3min").sum()

We can instead only resample those groups where we have points as follows:

@@ -1766,7 +1776,7 @@ We can instead only resample those groups where we have points as follows:
freq = to_offset(freq)
return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)

- ts.groupby(partial(round, freq="3T")).sum()
+ ts.groupby(partial(round, freq="3min")).sum()

.. _timeseries.aggregate:

@@ -1786,7 +1796,7 @@ Resampling a ``DataFrame``, the default will be to act on all columns with the s
index=pd.date_range("1/1/2012", freq="S", periods=1000),
columns=["A", "B", "C"],
)
- r = df.resample("3T")
+ r = df.resample("3min")
r.mean()

We can select a specific column or columns using standard getitem.
@@ -2155,7 +2165,7 @@ Passing a string representing a lower frequency than ``PeriodIndex`` returns par
dfp = pd.DataFrame(
np.random.randn(600, 1),
columns=["A"],
- index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
+ index=pd.period_range("2013-01-01 9:00", periods=600, freq="min"),
)
dfp
dfp.loc["2013-01-01 10H"]
11 changes: 9 additions & 2 deletions doc/source/whatsnew/v0.13.0.rst
@@ -642,9 +642,16 @@ Enhancements
Period conversions in the range of seconds and below were reworked and extended
up to nanoseconds. Periods in the nanosecond range are now available.

- .. ipython:: python
+ .. code-block:: python

-    pd.date_range('2013-01-01', periods=5, freq='5N')
+    In [79]: pd.date_range('2013-01-01', periods=5, freq='5N')
+    Out[79]:
+    DatetimeIndex([ '2013-01-01 00:00:00',
+    '2013-01-01 00:00:00.000000005',
+    '2013-01-01 00:00:00.000000010',
+    '2013-01-01 00:00:00.000000015',
+    '2013-01-01 00:00:00.000000020'],
+    dtype='datetime64[ns]', freq='5N')

or with frequency as offset

24 changes: 23 additions & 1 deletion doc/source/whatsnew/v0.15.0.rst
@@ -185,7 +185,29 @@ Constructing a ``TimedeltaIndex`` with a regular range
.. ipython:: python

pd.timedelta_range('1 days', periods=5, freq='D')
- pd.timedelta_range(start='1 days', end='2 days', freq='30T')

+ .. code-block:: python
+
+    In [20]: pd.timedelta_range(start='1 days', end='2 days', freq='30T')
+    Out[20]:
+    TimedeltaIndex(['1 days 00:00:00', '1 days 00:30:00', '1 days 01:00:00',
+    '1 days 01:30:00', '1 days 02:00:00', '1 days 02:30:00',
+    '1 days 03:00:00', '1 days 03:30:00', '1 days 04:00:00',
+    '1 days 04:30:00', '1 days 05:00:00', '1 days 05:30:00',
+    '1 days 06:00:00', '1 days 06:30:00', '1 days 07:00:00',
+    '1 days 07:30:00', '1 days 08:00:00', '1 days 08:30:00',
+    '1 days 09:00:00', '1 days 09:30:00', '1 days 10:00:00',
+    '1 days 10:30:00', '1 days 11:00:00', '1 days 11:30:00',
+    '1 days 12:00:00', '1 days 12:30:00', '1 days 13:00:00',
+    '1 days 13:30:00', '1 days 14:00:00', '1 days 14:30:00',
+    '1 days 15:00:00', '1 days 15:30:00', '1 days 16:00:00',
+    '1 days 16:30:00', '1 days 17:00:00', '1 days 17:30:00',
+    '1 days 18:00:00', '1 days 18:30:00', '1 days 19:00:00',
+    '1 days 19:30:00', '1 days 20:00:00', '1 days 20:30:00',
+    '1 days 21:00:00', '1 days 21:30:00', '1 days 22:00:00',
+    '1 days 22:30:00', '1 days 23:00:00', '1 days 23:30:00',
+    '2 days 00:00:00'],
+    dtype='timedelta64[ns]', freq='30T')

You can now use a ``TimedeltaIndex`` as the index of a pandas object

3 changes: 3 additions & 0 deletions doc/source/whatsnew/v2.1.0.rst
@@ -549,6 +549,9 @@ Other Deprecations
- Deprecated parameter ``obj`` in :meth:`GroupBy.get_group` (:issue:`53545`)
- Deprecated positional indexing on :class:`Series` with :meth:`Series.__getitem__` and :meth:`Series.__setitem__`, in a future version ``ser[item]`` will *always* interpret ``item`` as a label, not a position (:issue:`50617`)
- Deprecated replacing builtin and NumPy functions in ``.agg``, ``.apply``, and ``.transform``; use the corresponding string alias (e.g. ``"sum"`` for ``sum`` or ``np.sum``) instead (:issue:`53425`)
- Deprecated strings ``T``, ``L``, ``U``, and ``N`` denoting aliases for time series frequencies. Please use ``min``, ``ms``, ``us``, and ``ns`` instead of ``T``, ``L``, ``U``, and ``N`` (:issue:`52536`)
Review comment (Member):

Should we specify what functions/methods are relevant here?

Reply (Contributor, author):

Yes, I think we should. I'll do that.

- Deprecated strings ``T``, ``L``, ``U``, and ``N`` denoting resolutions in :meth:`Timedelta.resolution_string`. Please use ``min``, ``ms``, ``us``, and ``ns`` instead of ``T``, ``L``, ``U``, and ``N`` (:issue:`52536`)
- Deprecated strings ``T``, ``L``, ``U``, and ``N`` denoting units in :class:`Timedelta`. Please use ``min``, ``ms``, ``us``, and ``ns`` instead of ``T``, ``L``, ``U``, and ``N`` (:issue:`52536`)
- Deprecated strings ``T``, ``t``, ``L`` and ``l`` denoting units in :func:`to_timedelta` (:issue:`52536`)
- Deprecated the "method" and "limit" keywords in ``ExtensionArray.fillna``, implement and use ``pad_or_backfill`` instead (:issue:`53621`)
- Deprecated the "method" and "limit" keywords on :meth:`Series.fillna`, :meth:`DataFrame.fillna`, :meth:`SeriesGroupBy.fillna`, :meth:`DataFrameGroupBy.fillna`, and :meth:`Resampler.fillna`, use ``obj.bfill()`` or ``obj.ffill()`` instead (:issue:`53394`)
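As a companion to the ``Timedelta`` and ``to_timedelta`` whatsnew entries above, a minimal sketch of the replacement unit strings (using only the new spellings, so it runs regardless of whether the deprecated codes still exist):

```python
import pandas as pd

# Unit strings that replace the deprecated single-letter codes:
td_min = pd.Timedelta(90, unit="min")   # was unit="T"
td_ms = pd.Timedelta(1500, unit="ms")   # was unit="L"

print(td_min)                 # 0 days 01:30:00
print(td_ms.total_seconds())  # 1.5

# to_timedelta accepts the same spellings:
tds = pd.to_timedelta([1, 2, 3], unit="us")  # was unit="U"
```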