Setup sybil #1574

Merged · Jun 16, 2023 · 23 commits (changes shown from 19 commits)

2 changes: 1 addition & 1 deletion .github/workflows/release.yml
@@ -36,7 +36,7 @@ jobs:
name: Build release tarball
runs-on: ubuntu-latest
container:
image: glotzerlab/ci:2023.05-ubuntu20.04
image: glotzerlab/ci:2023.06-ubuntu20.04

steps:
- name: Checkout
2 changes: 1 addition & 1 deletion .github/workflows/templates/workflow.yml
@@ -1,5 +1,5 @@
<% block name %><% endblock %>
<% set container_prefix="glotzerlab/ci:2023.05" %>
<% set container_prefix="glotzerlab/ci:2023.06" %>

<% block concurrency %>
concurrency:
14 changes: 7 additions & 7 deletions .github/workflows/test.yml
@@ -59,7 +59,7 @@ jobs:
name: Build [${{ join(matrix.config, '_') }}]
runs-on: ${{ matrix.build_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
strategy:
matrix:
include:
@@ -168,7 +168,7 @@ jobs:
needs: build
runs-on: ${{ matrix.test_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
options: ${{ matrix.test_docker_options }} -e CUDA_VISIBLE_DEVICES
strategy:
matrix:
@@ -229,7 +229,7 @@ jobs:
needs: build
runs-on: ${{ matrix.test_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
options: ${{ matrix.test_docker_options }} -e CUDA_VISIBLE_DEVICES
strategy:
matrix:
@@ -283,7 +283,7 @@ jobs:
needs: build
runs-on: ${{ matrix.test_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
options: ${{ matrix.test_docker_options }} -e CUDA_VISIBLE_DEVICES
strategy:
matrix:
@@ -334,7 +334,7 @@ jobs:
name: Build [${{ join(matrix.config, '_') }}]
runs-on: ${{ matrix.build_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
strategy:
matrix:
include:
@@ -445,7 +445,7 @@ jobs:
needs: build_release
runs-on: ${{ matrix.test_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
options: ${{ matrix.test_docker_options }} -e CUDA_VISIBLE_DEVICES
strategy:
matrix:
@@ -512,7 +512,7 @@ jobs:
needs: build_release
runs-on: ${{ matrix.test_runner }}
container:
image: glotzerlab/ci:2023.05-${{ matrix.config[0] }}
image: glotzerlab/ci:2023.06-${{ matrix.config[0] }}
options: ${{ matrix.test_docker_options }} -e CUDA_VISIBLE_DEVICES
strategy:
matrix:
28 changes: 28 additions & 0 deletions hoomd/conftest.py
@@ -14,6 +14,9 @@
import atexit
import os
import numpy
import sybil
import sybil.parsers.rest

from hoomd.logging import LoggerCategories
from hoomd.snapshot import Snapshot
from hoomd import Simulation
@@ -33,6 +36,31 @@
devices.append(hoomd.device.GPU)


def setup_sybil_tests(namespace):
"""Sybil setup function."""
# Allow documentation tests to use numpy.
namespace['numpy'] = numpy

namespace['gpu_not_available'] = _n_available_gpu == 0


pytest_collect_file = sybil.Sybil(
parsers=[
sybil.parsers.rest.PythonCodeBlockParser(),
sybil.parsers.rest.SkipParser(),
],
# Despite being documented as fnmatch syntax, in practice ``patterns`` matches
# whole relative paths. TODO: when all code examples function, search
# *.py, */*.py, */*/*.py, ... as many levels deep as needed.
patterns=[
'device.py',
'md/methods/methods.py',
'md/methods/thermostats.py',
],
setup=setup_sybil_tests,
fixtures=['tmp_path']).pytest()


@pytest.fixture(scope='session', params=devices)
def device(request):
"""Parameterized Device fixture.
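With this ``pytest_collect_file`` hook in place, Sybil collects the rST code blocks in the matched modules' docstrings and runs them as pytest test items. The sketch below is not part of the diff; it only illustrates the shape of a docstring example that this configuration picks up, using the ``gpu_not_available`` name injected by ``setup_sybil_tests``:

.. code-block:: python

    # Illustrative only: a module docstring of the form Sybil collects from the
    # files listed in ``patterns``.  The skip directive consults the namespace
    # populated by ``setup_sybil_tests`` in conftest.py.
    """Module documentation.

    .. skip: next if(gpu_not_available)

    .. code-block:: python

        gpu = hoomd.device.GPU()
    """

Assuming the standard Sybil/pytest integration, the collected examples then run as ordinary tests, for example via ``python3 -m pytest hoomd/device.py``.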
96 changes: 91 additions & 5 deletions hoomd/device.py
@@ -12,12 +12,29 @@
different `hoomd.Simulation` object. One `Device` object may also be shared
with many `hoomd.Simulation` objects.

.. rubric:: Examples:

.. code-block:: python

cpu = hoomd.device.CPU()

.. skip: next if(gpu_not_available)

.. code-block:: python

gpu = hoomd.device.GPU()

Tip:
Reuse `Device` objects when possible. There is a non-negligible overhead
to creating each `Device`, especially on the GPU.

See Also:
`hoomd.Simulation`

.. invisible-code-block: python

# Rename tmp_path to path to avoid giving users the wrong signal.
path = tmp_path
"""

import contextlib
@@ -32,6 +49,12 @@ class NoticeFile:
device (`Device`): The `Device` object.
level (int): Message notice level. Default value is 1.

.. rubric:: Example:

.. code-block:: python

notice_file = hoomd.device.NoticeFile(device=cpu)

Note:
Use this in combination with `Device.message_filename` to combine notice
messages with output from code that expects file-like objects (such as
@@ -48,6 +71,12 @@ def write(self, message):

Args:
message (str): Message to write.

.. rubric:: Example:

.. code-block:: python

notice_file.write('Message\\n')
"""
self._buff += message

@@ -120,6 +149,12 @@ def notice_level(self):
default level of 2 shows messages that the developers expect most users
will want to see. Set the level lower to reduce verbosity or as high as
10 to get extremely verbose debugging messages.

.. rubric:: Example:

.. code-block:: python

cpu.notice_level = 4
"""
return self._cpp_msg.getNoticeLevel()

@@ -141,18 +176,30 @@ def message_filename(self):
Set `message_filename` to `None` to use the system's ``stdout`` and
``stderr``.

.. rubric:: Examples:

.. code-block:: python

cpu.message_filename = str(path / 'messages.log')

.. code-block:: python

cpu.message_filename = None

Note:
All MPI ranks within a given partition must open the same file.
To ensure this, the given file name on rank 0 is broadcast to the
other ranks. Different partitions may open separate files. For
example:

.. code::
.. skip: next if(cpu.communicator.num_ranks % 2 != 0)

.. code-block:: python

communicator = hoomd.communicator.Communicator(
ranks_per_partition=2)
filename = f'messages.{communicator.partition}'
device = hoomd.device.GPU(communicator=communicator,
device = hoomd.device.CPU(communicator=communicator,
message_filename=filename)
"""
return self._message_filename
@@ -197,6 +244,12 @@ def notice(self, message, level=1):
Write the given message string to the output defined by
`message_filename` on MPI rank 0 when `notice_level` >= ``level``.

.. rubric:: Example:

.. code-block:: python

cpu.notice('Message')

Hint:
Use `notice` instead of `print` to write status messages and your
scripts will work well in parallel MPI jobs. `notice` writes message
@@ -276,6 +329,14 @@ class GPU(Device):
that all GPUs support concurrent managed memory access and have high
bandwidth interconnects.

.. rubric:: Example:

.. skip: next if(gpu_not_available)

.. code-block:: python

gpu = hoomd.device.GPU()

"""

def __init__(
@@ -307,6 +368,14 @@ def gpu_error_checking(self):
When `False` (the default), error messages from the GPU may not be
noticed immediately. Set to `True` to increase the accuracy of the GPU
error messages at the cost of significantly reduced performance.

.. rubric:: Example:

.. skip: next if(gpu_not_available)

.. code-block:: python

gpu.gpu_error_checking = True
"""
return self._cpp_exec_conf.isCUDAErrorCheckingEnabled()

@@ -361,10 +430,15 @@ def enable_profiling(self):
context manager and continue the simulation for a time. Profiling stops
when the context manager closes.

Example::
.. rubric:: Example:

.. skip: next if(gpu_not_available)

with device.enable_profiling():
sim.run(1000)
.. code-block:: python

simulation = hoomd.util.make_example_simulation(device=gpu)
with gpu.enable_profiling():
simulation.run(1000)
"""
try:
self._cpp_exec_conf.hipProfileStart()
@@ -392,6 +466,12 @@ class CPU(Device):
.. rubric:: MPI

In MPI execution environments, create a `CPU` device on every rank.

.. rubric:: Example:

.. code-block:: python

cpu = hoomd.device.CPU()
"""

def __init__(
@@ -432,6 +512,12 @@ def auto_select(

Returns:
Instance of `GPU` if available, otherwise `CPU`.

.. rubric:: Example:

.. code-block:: python

device = hoomd.device.auto_select()
"""
# Set class according to C++ object
if len(GPU.get_available_devices()) > 0:
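The examples added above each show a single call in isolation. As a rough end-to-end sketch (not part of the diff), the documented pieces combine as follows; the ``messages.log`` filename is arbitrary:

.. code-block:: python

    import hoomd

    # Create a CPU device and raise the verbosity above the documented default of 2.
    cpu = hoomd.device.CPU()
    cpu.notice_level = 4

    # Redirect notice messages to a file, write one, then restore stdout/stderr.
    cpu.message_filename = 'messages.log'
    cpu.notice('Simulation starting')
    cpu.message_filename = None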
15 changes: 15 additions & 0 deletions hoomd/md/methods/__init__.py
@@ -15,6 +15,21 @@
.. rubric:: Integration methods with constraints

For methods that constrain motion to a manifold see `hoomd.md.methods.rattle`.

.. rubric:: Preparation

Create a `hoomd.md.Integrator` to accept an integration method (or methods):

.. code-block:: python

simulation = hoomd.util.make_example_simulation()
simulation.operations.integrator = hoomd.md.Integrator(dt=0.001)

Create a `hoomd.logging.Logger` that will be used in the examples below:

.. code-block:: python

logger = hoomd.logging.Logger()
"""

from . import rattle
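Following the Preparation block above, a per-method example would then attach an integration method to the integrator and, optionally, register it with the logger. The sketch below is not part of the diff; the ``ConstantVolume`` and ``Bussi`` names are assumptions based on the HOOMD 4.x API that ``md/methods/methods.py`` and ``md/methods/thermostats.py`` document:

.. code-block:: python

    import hoomd

    # Setup from the Preparation block above.
    simulation = hoomd.util.make_example_simulation()
    simulation.operations.integrator = hoomd.md.Integrator(dt=0.001)
    logger = hoomd.logging.Logger()

    # Assumed HOOMD 4.x names: a constant-volume method with a Bussi thermostat
    # applied to all particles.
    nvt = hoomd.md.methods.ConstantVolume(
        filter=hoomd.filter.All(),
        thermostat=hoomd.md.methods.thermostats.Bussi(kT=1.5))
    simulation.operations.integrator.methods.append(nvt)

    # Log the method's loggable quantities alongside other simulation output.
    logger.add(nvt)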