Add improved docstring processing #2885

Open

wants to merge 55 commits into base: main
Changes from 11 commits
Commits
55 commits
f263927
Add improved docstring processing
TomFryers Feb 17, 2022
be791f5
Add PR number to changelog
TomFryers Feb 17, 2022
073a55d
separate CHANGELOG section for preview style
JelleZijlstra Feb 21, 2022
52a36b5
Make fix_docstring preview parameter keyword-only
TomFryers Feb 21, 2022
3895c4d
Pretend to be in preview mode for diff-shades
TomFryers Feb 21, 2022
30e0ad3
Merge branch 'psf:main' into main
TomFryers Feb 21, 2022
0802ada
Merge remote-tracking branch 'origin/previewstyle'
TomFryers Feb 21, 2022
2541264
Revert "Pretend to be in preview mode for diff-shades"
TomFryers Feb 21, 2022
6156ea5
Merge branch 'main' of https://github.com/psf/black
TomFryers Feb 24, 2022
975767f
Blacken Black
TomFryers Feb 24, 2022
8adc896
Shorten over-long docstrings
TomFryers Feb 24, 2022
e5cdcff
Move opening quotes off own line
TomFryers Mar 2, 2022
7ed5fd2
Fix first-line indentation
TomFryers Mar 2, 2022
ebb4a32
Revert "Revert "Pretend to be in preview mode for diff-shades""
TomFryers Mar 2, 2022
a79d394
Undo temporary changes
TomFryers Mar 2, 2022
d223a97
Merge branch 'psf:main' into main
TomFryers Mar 11, 2022
ea07503
Merge remote-tracking branch 'origin/main'
TomFryers Mar 16, 2022
45a52de
Merge remote-tracking branch 'origin/main'
TomFryers Mar 25, 2022
54e8451
Merge remote-tracking branch 'origin/main'
TomFryers Mar 27, 2022
e073b63
Merge remote-tracking branch 'origin/main'
TomFryers Mar 28, 2022
76c5c13
Merge remote-tracking branch 'origin/main'
TomFryers Apr 5, 2022
06b2587
Merge branch 'main' into main
JelleZijlstra Apr 11, 2022
54976a3
Merge remote-tracking branch 'origin/main'
TomFryers Apr 22, 2022
2360449
Merge remote-tracking branch 'origin/main'
TomFryers May 5, 2022
ba84a7c
Merge remote-tracking branch 'origin/main'
TomFryers May 23, 2022
7b33820
Merge remote-tracking branch 'origin/main'
TomFryers Jun 10, 2022
8fc0430
Fix tests
TomFryers Jun 10, 2022
4b5f3c7
Fix bad single-line docstring indentation
TomFryers Jun 10, 2022
25be451
Merge remote-tracking branch 'origin/main'
TomFryers Jun 11, 2022
f51ea0a
Remove merge conflict residue
TomFryers Jun 11, 2022
b638eeb
Merge remote-tracking branch 'origin/main'
TomFryers Jul 9, 2022
5ad579a
Merge remote-tracking branch 'origin/main'
TomFryers Aug 2, 2022
08ec335
Merge remote-tracking branch 'origin/main'
TomFryers Aug 3, 2022
d72db33
Merge remote-tracking branch 'origin/main'
TomFryers Aug 17, 2022
f6d9127
Merge remote-tracking branch 'origin/main'
TomFryers Aug 30, 2022
a34724e
Blacken unblackened file
TomFryers Aug 30, 2022
0c872ce
Merge remote-tracking branch 'origin/main'
TomFryers Sep 2, 2022
b8d9c2e
Remove rogue merge marker
TomFryers Sep 2, 2022
2e71f32
Add whitespace to CHANGES.md
TomFryers Sep 2, 2022
b98a6a6
Merge remote-tracking branch 'origin/main'
TomFryers Sep 15, 2022
1595441
Merge branch 'main' into main
TomFryers Sep 26, 2022
a58d21c
Merge branch 'main' into main
TomFryers Oct 18, 2022
67cd06c
Merge branch 'main' into main
TomFryers Oct 27, 2022
3b089e3
Remove extra blank line
TomFryers Oct 27, 2022
9c6d257
Put LinesBlock docstring quotes on their own line
TomFryers Oct 27, 2022
ffaae19
Merge branch 'main' into main
TomFryers Nov 3, 2022
86d806c
Merge branch 'main' into main
TomFryers Nov 9, 2022
e5e2da6
Merge remote-tracking branch 'origin/main'
TomFryers Nov 10, 2022
9ba93f5
Use new quote style
TomFryers Nov 10, 2022
12dd417
Merge remote-tracking branch 'origin/main'
TomFryers Feb 9, 2023
34dbe7e
Fix docstring preview tests
TomFryers Feb 9, 2023
88fe1aa
Blacken Black
TomFryers Feb 9, 2023
c1f6e4a
Shorten docstring to fit in limit
TomFryers Feb 9, 2023
2f16112
Merge remote-tracking branch 'origin/main'
TomFryers Mar 31, 2024
b8c169f
Fix documentation link
TomFryers Mar 31, 2024
2 changes: 2 additions & 0 deletions CHANGES.md
@@ -14,6 +14,8 @@

<!-- Changes that affect Black's preview style -->

- Format docstrings to have consistent quote placement (#2885)

### _Blackd_

<!-- Changes to blackd -->
6 changes: 6 additions & 0 deletions docs/the_black_code_style/future_style.md
@@ -49,3 +49,9 @@ plain strings. User-made splits are respected when they do not exceed the line length
limit. Line continuation backslashes are converted into parenthesized strings.
Unnecessary parentheses are stripped. The stability and status of this feature is
tracked in [this issue](https://github.com/psf/black/issues/2188).

### Improved docstring processing

_Black_ will ensure docstrings are formatted consistently by removing extra blank lines
at the beginning and end of docstrings, placing the opening and closing quotes on their
own lines, and collapsing docstrings with a single line of text down to one line.
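
For illustration, here is a hypothetical before/after sketch of that behaviour (the
function names are invented for the example and are not taken from this diff):

```python
# Hypothetical input:
def frobnicate(items):
    """Frobnicate each item in place.

    Items that are already frobnicated are left untouched.

    """


def is_empty(items):
    """
    Return True if `items` is empty.
    """


# Expected output under the preview docstring style described above:
def frobnicate(items):
    """
    Frobnicate each item in place.

    Items that are already frobnicated are left untouched.
    """


def is_empty(items):
    """Return True if `items` is empty."""
```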
59 changes: 37 additions & 22 deletions src/black/__init__.py
@@ -113,7 +113,8 @@ def from_configuration(
def read_pyproject_toml(
ctx: click.Context, param: click.Parameter, value: Optional[str]
) -> Optional[str]:
"""Inject Black configuration from "pyproject.toml" into defaults in `ctx`.
"""
Inject Black configuration from "pyproject.toml" into defaults in `ctx`.

Returns the path to a successfully found and read configuration file, None
otherwise.
@@ -159,7 +160,8 @@ def read_pyproject_toml(
def target_version_option_callback(
c: click.Context, p: Union[click.Option, click.Parameter], v: Tuple[str, ...]
) -> List[TargetVersion]:
"""Compute the target versions from a --target-version flag.
"""
Compute the target versions from a --target-version flag.

This is its own function because mypy couldn't infer the type correctly
when it was a lambda, causing mypyc trouble.
@@ -168,7 +170,8 @@ def target_version_option_callback(


def re_compile_maybe_verbose(regex: str) -> Pattern[str]:
"""Compile a regular expression string in `regex`.
"""
Compile a regular expression string in `regex`.

If it contains newlines, use verbose mode.
"""
@@ -664,9 +667,7 @@ def get_sources(
def path_empty(
src: Sized, msg: str, quiet: bool, verbose: bool, ctx: click.Context
) -> None:
"""
Exit if there is no `src` provided for formatting
"""
"""Exit if there is no `src` provided for formatting"""
if not src:
if verbose or not quiet:
out(msg)
@@ -700,7 +701,8 @@ def reformat_code(
def reformat_one(
src: Path, fast: bool, write_back: WriteBack, mode: Mode, report: "Report"
) -> None:
"""Reformat a single file under `src` without spawning child processes.
"""
Reformat a single file under `src` without spawning child processes.

`fast`, `write_back`, and `mode` options are passed to
:func:`format_file_in_place` or :func:`format_stdin_to_stdout`.
@@ -803,7 +805,8 @@ async def schedule_formatting(
loop: asyncio.AbstractEventLoop,
executor: Executor,
) -> None:
"""Run formatting of `sources` in parallel using the provided `executor`.
"""
Run formatting of `sources` in parallel using the provided `executor`.

(Use ProcessPoolExecutors for actual parallelism.)

@@ -875,7 +878,8 @@ def format_file_in_place(
write_back: WriteBack = WriteBack.NO,
lock: Any = None, # multiprocessing.Manager().Lock() is some crazy proxy
) -> bool:
"""Format file under `src` path. Return True if changed.
"""
Format file under `src` path. Return True if changed.

If `write_back` is DIFF, write a diff to stdout. If it is YES, write reformatted
code to the file.
@@ -934,7 +938,8 @@ def format_stdin_to_stdout(
write_back: WriteBack = WriteBack.NO,
mode: Mode,
) -> bool:
"""Format file on stdin. Return True if changed.
"""
Format file on stdin. Return True if changed.

If content is None, it's read from sys.stdin.

@@ -981,7 +986,8 @@ def format_stdin_to_stdout(
def check_stability_and_equivalence(
src_contents: str, dst_contents: str, *, mode: Mode
) -> None:
"""Perform stability and equivalence checks.
"""
Perform stability and equivalence checks.

Raise AssertionError if source and destination contents are not
equivalent, or if a second pass of the formatter would format the
@@ -992,7 +998,8 @@ def check_stability_and_equivalence(


def format_file_contents(src_contents: str, *, fast: bool, mode: Mode) -> FileContent:
"""Reformat contents of a file and return new contents.
"""
Reformat contents of a file and return new contents.

If `fast` is False, additionally confirm that the reformatted code is
valid by calling :func:`assert_equivalent` and :func:`assert_stable` on it.
@@ -1015,7 +1022,8 @@ def format_file_contents(src_contents: str, *, fast: bool, mode: Mode) -> FileContent:


def validate_cell(src: str, mode: Mode) -> None:
"""Check that cell does not already contain TransformerManager transformations,
"""
Check that cell does not already contain TransformerManager transformations,
or non-Python cell magics, which might cause tokenizer_rt to break because of
indentations.

@@ -1041,7 +1049,8 @@ def validate_cell(src: str, mode: Mode) -> None:


def format_cell(src: str, *, fast: bool, mode: Mode) -> str:
"""Format code in given cell of Jupyter notebook.
"""
Format code in given cell of Jupyter notebook.

General idea is:

@@ -1078,7 +1087,8 @@ def format_cell(src: str, *, fast: bool, mode: Mode) -> str:


def validate_metadata(nb: MutableMapping[str, Any]) -> None:
"""If notebook is marked as non-Python, don't format it.
"""
If notebook is marked as non-Python, don't format it.

All notebook metadata fields are optional, see
https://nbformat.readthedocs.io/en/latest/format_description.html. So
@@ -1090,7 +1100,8 @@ def validate_metadata(nb: MutableMapping[str, Any]) -> None:


def format_ipynb_string(src_contents: str, *, fast: bool, mode: Mode) -> FileContent:
"""Format Jupyter notebook.
"""
Format Jupyter notebook.

Operate cell-by-cell, only on code cells, only for Python notebooks.
If the ``.ipynb`` originally had a trailing newline, it'll be preserved.
@@ -1119,7 +1130,8 @@ def format_ipynb_string(src_contents: str, *, fast: bool, mode: Mode) -> FileContent:


def format_str(src_contents: str, *, mode: Mode) -> str:
"""Reformat a string and return new contents.
"""
Reformat a string and return new contents.

`mode` determines formatting options, such as how many characters per line are
allowed. Example:
@@ -1146,7 +1158,6 @@ def f(
arg: str = '',
) -> None:
hey

"""
dst_contents = _format_str_once(src_contents, mode=mode)
# Forced second pass to work around optional trailing commas (becoming
@@ -1188,7 +1199,8 @@ def _format_str_once(src_contents: str, *, mode: Mode) -> str:


def decode_bytes(src: bytes) -> Tuple[FileContent, Encoding, NewLine]:
"""Return a tuple of (decoded_contents, encoding, newline).
"""
Return a tuple of (decoded_contents, encoding, newline).

`newline` is either CRLF or LF but `decoded_contents` is decoded with
universal newlines (i.e. only contains LF).
@@ -1207,7 +1219,8 @@ def decode_bytes(src: bytes) -> Tuple[FileContent, Encoding, NewLine]:
def get_features_used( # noqa: C901
node: Node, *, future_imports: Optional[Set[str]] = None
) -> Set[Feature]:
"""Return a set of (relatively) new Python features used in this file.
"""
Return a set of (relatively) new Python features used in this file.

Currently looking for:
- f-strings;
@@ -1406,15 +1419,17 @@ def assert_stable(src: str, dst: str, mode: Mode) -> None:

@contextmanager
def nullcontext() -> Iterator[None]:
"""Return an empty context manager.
"""
Return an empty context manager.

To be used like `nullcontext` in Python 3.7.
"""
yield


def patch_click() -> None:
"""Make Click not crash on Python 3.6 with LANG=C.
"""
Make Click not crash on Python 3.6 with LANG=C.

On certain misconfigured environments, Python 3 selects the ASCII encoding as the
default which restricts paths that it can access during the lifetime of the
24 changes: 16 additions & 8 deletions src/black/brackets.py
@@ -66,7 +66,8 @@ class BracketTracker:
invisible: List[Leaf] = field(default_factory=list)

def mark(self, leaf: Leaf) -> None:
"""Mark `leaf` with bracket-related metadata. Keep track of delimiters.
"""
Mark `leaf` with bracket-related metadata. Keep track of delimiters.

All leaves receive an int `bracket_depth` field that stores how deep
within brackets a given leaf is. 0 means there are no enclosing brackets
@@ -120,15 +121,17 @@ def any_open_brackets(self) -> bool:
return bool(self.bracket_match)

def max_delimiter_priority(self, exclude: Iterable[LeafID] = ()) -> Priority:
"""Return the highest priority of a delimiter found on the line.
"""
Return the highest priority of a delimiter found on the line.

Values are consistent with what `is_split_*_delimiter()` return.
Raises ValueError on no delimiters.
"""
return max(v for k, v in self.delimiters.items() if k not in exclude)

def delimiter_count_with_priority(self, priority: Priority = 0) -> int:
"""Return the number of delimiters with the given `priority`.
"""
Return the number of delimiters with the given `priority`.

If no `priority` is passed, defaults to max priority on the line.
"""
@@ -139,7 +142,8 @@ def delimiter_count_with_priority(self, priority: Priority = 0) -> int:
return sum(1 for p in self.delimiters.values() if p == priority)

def maybe_increment_for_loop_variable(self, leaf: Leaf) -> bool:
"""In a for loop, or comprehension, the variables are often unpacks.
"""
In a for loop, or comprehension, the variables are often unpacks.

To avoid splitting on the comma in this situation, increase the depth of
tokens between `for` and `in`.
@@ -166,7 +170,8 @@ def maybe_decrement_after_for_loop_variable(self, leaf: Leaf) -> bool:
return False

def maybe_increment_lambda_arguments(self, leaf: Leaf) -> bool:
"""In a lambda expression, there might be more than one argument.
"""
In a lambda expression, there might be more than one argument.

To avoid splitting on the comma in this situation, increase the depth of
tokens between `lambda` and `:`.
@@ -197,7 +202,8 @@ def get_open_lsqb(self) -> Optional[Leaf]:


def is_split_after_delimiter(leaf: Leaf, previous: Optional[Leaf] = None) -> Priority:
"""Return the priority of the `leaf` delimiter, given a line break after it.
"""
Return the priority of the `leaf` delimiter, given a line break after it.

The delimiter priorities returned here are from those delimiters that would
cause a line break after themselves.
@@ -211,7 +217,8 @@ def is_split_after_delimiter(leaf: Leaf, previous: Optional[Leaf] = None) -> Priority:


def is_split_before_delimiter(leaf: Leaf, previous: Optional[Leaf] = None) -> Priority:
"""Return the priority of the `leaf` delimiter, given a line break before it.
"""
Return the priority of the `leaf` delimiter, given a line break before it.

The delimiter priorities returned here are from those delimiters that would
cause a line break before themselves.
@@ -307,7 +314,8 @@ def is_split_before_delimiter(leaf: Leaf, previous: Optional[Leaf] = None) -> Priority:


def max_delimiter_priority_in_atom(node: LN) -> Priority:
"""Return maximum delimiter priority inside `node`.
"""
Return maximum delimiter priority inside `node`.

This is specific to atoms with contents contained in a pair of parentheses.
If `node` isn't an atom or there are no enclosing parentheses, returns 0.
9 changes: 6 additions & 3 deletions src/black/cache.py
@@ -21,7 +21,8 @@


def get_cache_dir() -> Path:
"""Get the cache directory used by black.
"""
Get the cache directory used by black.

Users can customize this directory on all systems using `BLACK_CACHE_DIR`
environment variable. By default, the cache directory is the user cache directory
@@ -40,7 +41,8 @@ def get_cache_dir() -> Path:


def read_cache(mode: Mode) -> Cache:
"""Read the cache if it exists and is well formed.
"""
Read the cache if it exists and is well formed.

If it is not well formed, the call to write_cache later should resolve the issue.
"""
@@ -68,7 +70,8 @@ def get_cache_info(path: Path) -> CacheInfo:


def filter_cached(cache: Cache, sources: Iterable[Path]) -> Tuple[Set[Path], Set[Path]]:
"""Split an iterable of paths in `sources` into two sets.
"""
Split an iterable of paths in `sources` into two sets.

The first contains paths of files that modified on disk or are not in the
cache. The other contains paths to non-modified files.
18 changes: 12 additions & 6 deletions src/black/comments.py
@@ -26,7 +26,8 @@

@dataclass
class ProtoComment:
"""Describes a piece of syntax that is a comment.
"""
Describes a piece of syntax that is a comment.

It's not a :class:`blib2to3.pytree.Leaf` so that:

@@ -43,7 +44,8 @@ class ProtoComment:


def generate_comments(leaf: LN) -> Iterator[Leaf]:
"""Clean the prefix of the `leaf` and generate comments from it, if any.
"""
Clean the prefix of the `leaf` and generate comments from it, if any.

Comments in lib2to3 are shoved into the whitespace prefix. This happens
in `pgen2/driver.py:Driver.parse_tokens()`. This was a brilliant implementation
@@ -103,7 +105,8 @@ def list_comments(prefix: str, *, is_endmarker: bool) -> List[ProtoComment]:


def make_comment(content: str) -> str:
"""Return a consistently formatted comment from the given `content` string.
"""
Return a consistently formatted comment from the given `content` string.

All comments (except for "##", "#!", "#:", '#'", "#%%") should have a single
space between the hash sign and the content.
@@ -136,7 +139,8 @@ def normalize_fmt_off(node: Node) -> None:


def convert_one_fmt_off_pair(node: Node) -> bool:
"""Convert content of a single `# fmt: off`/`# fmt: on` into a standalone comment.
"""
Convert content of a single `# fmt: off`/`# fmt: on` into a standalone comment.

Returns True if a pair was converted.
"""
@@ -198,7 +202,8 @@ def convert_one_fmt_off_pair(node: Node) -> bool:


def generate_ignored_nodes(leaf: Leaf, comment: ProtoComment) -> Iterator[LN]:
"""Starting from the container of `leaf`, generate all leaves until `# fmt: on`.
"""
Starting from the container of `leaf`, generate all leaves until `# fmt: on`.

If comment is skip, returns leaf only.
Stops at the end of the block.
@@ -236,7 +241,8 @@ def generate_ignored_nodes(leaf: Leaf, comment: ProtoComment) -> Iterator[LN]:


def is_fmt_on(container: LN) -> bool:
"""Determine whether formatting is switched on within a container.
"""
Determine whether formatting is switched on within a container.
Determined by whether the last `# fmt:` comment is `on` or `off`.
"""
fmt_on = False