Complete refactor of TableView and table.html template #1518

Closed
simonw opened this issue Nov 19, 2021 · 46 comments


simonw commented Nov 19, 2021

Split from #878. The current TableView class is by far the most complex part of Datasette, and the most difficult to work on: https://github.com/simonw/datasette/blob/0.59.2/datasette/views/table.py

In #878 I started exploring a new pattern for building views. In doing so it became clear that TableView is the first beast that I need to slay - if I can refactor that into something neat the pattern for building other views will emerge as a natural consequence.

I've been trying to build this as a register_routes() plugin, as originally suggested in #870 - though unfortunately it looks like those plugins can't replace existing Datasette default views at the moment, see #1517. [UPDATE: I was wrong about this, plugins can over-ride default views just fine]

I also know that I want to have a fully documented template context for table.html as a major step on the way to Datasette 1.0, see #1510.

All of this adds up to the TableView refactor being a major project that will unblock a whole flurry of other things - so I'm going to work on that in this separate issue.


simonw commented Nov 19, 2021

Here's where I got to with my hacked-together initial plugin prototype - it managed to render the table page with some rows on it (and a bunch of missing functionality such as filters): https://gist.github.com/simonw/281eac9c73b062c3469607ad86470eb2

[Screenshot: the fixtures roadside_attractions table rendered with 4 rows]


simonw commented Nov 19, 2021

Ideally I'd like to execute the existing test suite against the new implementation - though that would require me to solve #1517 first so I can replace the view with the plugin version.


simonw commented Nov 19, 2021

I was wrong about that, you CAN over-ride default routes already.


simonw commented Nov 19, 2021

A (likely incomplete) list of features on the table page:

  • Display table/database/instance metadata
  • Show count of all results
  • Display table of results
    • Special table display treatment for URLs, numbers
    • Allow plugins to modify table cells
    • Respect ?_col= and ?_nocol=
  • Show interface for filtering by columns and operations
  • Show search box, support executing FTS searches
  • Sort table by specified column
  • Paginate table
  • Show facet results
  • Show suggested facets
  • Link to available exports
  • Display schema for table
    • Maybe it should show the SQL for the query too?
  • Handle various non-obvious querystring options, like ?_where= and ?_through=


simonw commented Nov 19, 2021

My goal is to break up a lot of this functionality into separate methods. These methods can be executed in parallel by asyncinject, but more importantly they can be used to build a much better JSON representation, where the default representation is lighter and ?_extra=x options can be used to execute more expensive portions and add them to the response.

So the HTML version itself needs to be re-written to use those JSON extras.
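That ?_extra= dispatch could look something like this minimal sketch - the extra names, their return shapes, and the table_json() function here are all hypothetical illustrations of the pattern, not Datasette's actual API:

```python
import asyncio

# Hypothetical registry mapping ?_extra= names to async functions that
# compute the more expensive parts of the response.
async def extra_count():
    return {"count": 42}

async def extra_suggested_facets():
    return {"suggested_facets": ["state", "city"]}

EXTRAS = {
    "count": extra_count,
    "suggested_facets": extra_suggested_facets,
}

async def table_json(requested_extras):
    # The default representation stays light; extras are opt-in and
    # can execute in parallel.
    data = {"rows": [{"id": 1}]}
    results = await asyncio.gather(
        *(EXTRAS[name]() for name in requested_extras if name in EXTRAS)
    )
    for result in results:
        data.update(result)
    return data

print(asyncio.run(table_json(["count"])))
# {'rows': [{'id': 1}], 'count': 42}
```

The HTML view would then ask for whichever extras its template needs, using the same mechanism as API consumers.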


simonw commented Nov 19, 2021

Right now the HTML version gets to cheat - it passes through objects that are not JSON serializable, including custom functions that can then be called by Jinja.

I'm interested in maybe removing this cheating - if the HTML version could only request JSON-serializable extras those could be exposed in the API as well.

It would also help clean up the kind-of-nasty pattern I use in the current BaseView, where everything returns both a bunch of JSON-serializable data AND an awaitable function that then gets to add extra things to the HTML context.


simonw commented Nov 19, 2021

... and while I'm doing all of this I can rewrite the templates to not use those cheating magical functions AND document the template context at the same time, refs #1510.


simonw commented Nov 19, 2021

Very confused by this piece of code here:

class Row:
    def __init__(self, cells):
        self.cells = cells

    def __iter__(self):
        return iter(self.cells)

    def __getitem__(self, key):
        for cell in self.cells:
            if cell["column"] == key:
                return cell["raw"]
        raise KeyError

    def display(self, key):
        for cell in self.cells:
            if cell["column"] == key:
                return cell["value"]
        return None

    def __str__(self):
        d = {
            key: self[key]
            for key in [
                c["column"] for c in self.cells if not c.get("is_special_link_column")
            ]
        }
        return json.dumps(d, default=repr, indent=2)

I added it in 754836e - in the new world that should probably be replaced by pure JSON.

Aha - this comment explains it: #521 (comment)

I think the trick is to redefine what a "cell_row" is. Each row is currently a list of cells:

if truncate_cells and len(display_value) > truncate_cells:
    display_value = display_value[:truncate_cells] + u"\u2026"
cells.append({"column": column, "value": display_value})
cell_rows.append(cells)

I can redefine the row (the cells variable in the above example) as a thing-that-iterates-cells (hence behaving like a list) but that also supports __getitem__ access for looking up cell values if you know the name of the column.

The goal was to support neater custom templates like this:

{% for row in display_rows %}
  <h2 class="scientist">{{ row["First_Name"] }} {{ row["Last_Name"] }}</h2>
  ...

This may be an argument for continuing to allow non-JSON-objects through to the HTML templates. Need to think about that a bit more.


simonw commented Nov 19, 2021

I'm going to try leaning into the asyncinject mechanism a bit here. One method can execute and return the raw rows. Another can turn that into the default minimal JSON representation. Then a third can take that (or take both) and use it to inflate out the JSON that the HTML template needs, with those extras and with the rendered cells from plugins.


simonw commented Nov 19, 2021

This may be an argument for continuing to allow non-JSON-objects through to the HTML templates. Need to think about that a bit more.

I can definitely support this using pure-JSON - I could make two versions of the row available, one that's an array of cell objects and the other that's an object mapping column names to column raw values.
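A rough sketch of those two pure-JSON row shapes - the cell keys are assumed to match the cells.append() structure quoted earlier, and the helper names here are made up:

```python
def row_as_cells(cells):
    # Array-of-cells form: preserves column order and display values,
    # suitable for the default table rendering loop.
    return [{"column": c["column"], "value": c["value"]} for c in cells]

def row_as_object(cells):
    # Object form: column name -> raw value, supporting the
    # row["First_Name"] style lookups custom templates want.
    return {c["column"]: c["raw"] for c in cells}

cells = [
    {"column": "First_Name", "raw": "Ada", "value": "Ada"},
    {"column": "Last_Name", "raw": "Lovelace", "value": "Lovelace"},
]
print(row_as_object(cells)["First_Name"])  # Ada
```

Both shapes serialize cleanly to JSON, so the HTML template and the API could share them.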


simonw commented Nov 28, 2021

Two new requirements inspired by work on the datasette-table (and datasette-notebook) projects:


simonw commented Nov 28, 2021

I'm also going to use the new datasette-table Web Component to help guide the design of the new API, which relates directly to this issue too:


simonw commented Nov 28, 2021

Aside: is there any reason this work can't complete the long-running goal of merging the TableView and QueryView, such that most of the features available for tables become available for arbitrary queries too?

I had already mentally committed to implementing facets for queries, but I just realized that filters could work too - using either a CTE or a nested query.

Pagination is the one holdout here, since table pagination uses keyset pagination over a known order. But maybe arbitrary queries can only be paginated if you order them first?
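A hedged sketch of what ordered keyset pagination over an arbitrary query could look like - keyset_page_sql() is a hypothetical helper, and it assumes the caller has already validated that order_by is a column the query actually returns:

```python
def keyset_page_sql(query, order_by, after=None, page_size=100):
    """Wrap an arbitrary SQL query so it can be keyset-paginated.

    after is the last order_by value seen on the previous page; the
    query is wrapped in a subselect so the outer where/order/limit
    apply regardless of the inner query's shape.
    """
    wrapped = f"select * from ({query}) as _inner"
    params = {"page_size": page_size + 1}  # fetch one extra to detect a next page
    if after is not None:
        wrapped += f" where {order_by} > :after"
        params["after"] = after
    wrapped += f" order by {order_by} limit :page_size"
    return wrapped, params

sql, params = keyset_page_sql("select id, name from users", "id", after=5)
print(sql)
# select * from (select id, name from users) as _inner where id > :after order by id limit :page_size
```

This only handles a single ascending sort column; compound keys (like Datasette's table pagination over primary keys) would need a tuple comparison instead.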


simonw commented Nov 28, 2021

(I could experiment with merging the two views by adding a temporary undocumented ?_sql= parameter to the in-progress table view that sets an alternative query instead of select cols from table - added bonus, this will force me to use introspection against the returned columns rather than mixing in the known columns for the specified table)
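Introspecting the columns of the returned query, rather than relying on the known table columns, could work like this sketch using sqlite3's cursor.description (columns_for_sql() is a hypothetical helper, not Datasette code):

```python
import sqlite3

def columns_for_sql(conn, sql):
    # Run the query with limit 0 and read cursor.description to learn
    # what columns an arbitrary query returns, without fetching rows.
    cursor = conn.execute(f"select * from ({sql}) limit 0")
    return [d[0] for d in cursor.description]

conn = sqlite3.connect(":memory:")
conn.execute("create table facetable (pk integer primary key, state text)")
print(columns_for_sql(conn, "select pk, state from facetable"))
# ['pk', 'state']
```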


simonw commented Dec 10, 2021

If I break this up into @inject methods, what methods could I have and what would they do?

  • resolve_path: Use request path to resolve the database and table. Could handle hash URLs too (if I don't manage to extract those to a plugin) - would be nice if this could raise a redirect, but I think that will instead have to be one of the things it returns
  • build_sql: Builds the SQL query based on the querystring (and some DB introspection)
  • execute_count: Execute the count(*)
  • execute_rows: Execute the query with limit 101 to fetch the rows
  • execute_facets: Execute all requested facets (could this do its own asyncio.gather() to run facets in parallel?)
  • suggest_facets: Execute facet suggestions

Are there any plugin hooks that would make sense to execute in parallel? Actually there might be: I don't think extra_template_vars, extra_css_urls, extra_js_urls, extra_body_script depend on each other so it might be possible to execute them in a parallel chunk (at least any of them that return awaitables).
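A sketch of running such hook results in a parallel chunk - assuming, as noted above, that each hook returns either a plain value or an awaitable (the helper names here are invented):

```python
import asyncio
import inspect

async def resolve(value):
    # Plugin hooks may return a plain value or an awaitable; normalize both.
    if inspect.isawaitable(value):
        return await value
    return value

async def run_hooks_in_parallel(hook_results):
    # Awaitable results run concurrently; plain values pass straight through.
    return await asyncio.gather(*(resolve(r) for r in hook_results))

async def slow_extra_css():
    await asyncio.sleep(0)  # stand-in for real async work
    return ["/plugin.css"]

results = asyncio.run(run_hooks_in_parallel([["/base.css"], slow_extra_css()]))
print(results)  # [['/base.css'], ['/plugin.css']]
```

This only makes sense for hooks that genuinely don't depend on each other's results, which is the open question in the comment above.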


simonw commented Dec 12, 2021

I have a hunch that the conclusion of this experiment may end up being that the asyncinject trick is kinda neat but the code will be easier to maintain (while still executing in parallel) if it's written using asyncio.gather directly instead.

It's possible asyncinject will end up being neat enough that I'll want to keep it though.


simonw commented Dec 12, 2021

Rebuilding TableView from the ground up is proving not to be much fun. I'm going to explore starting the refactor of the existing code by separating out the bit that generates the SQL query from the rest of it.


simonw commented Dec 12, 2021

The tests for TableView are currently mixed in with everything else in tests/test_api.py and tests/html.py - might be good to split those out into test_table_html.py and test_table_api.py since they're such a key part of how Datasette works.


simonw commented Dec 12, 2021

I don't think this code is necessary any more:

# Ensure we don't drop anything with an empty value e.g. ?name__exact=
args = MultiParams(
    urllib.parse.parse_qs(request.query_string, keep_blank_values=True)
)

That dates back from when Datasette was built on top of Sanic and Sanic didn't preserve those query parameters the way I needed it to:

# We roll our own query_string decoder because by default Sanic
# drops anything with an empty value e.g. ?name__exact=
args = RequestParameters(
    urllib.parse.parse_qs(request.query_string, keep_blank_values=True)
)


simonw commented Dec 12, 2021

No, removing that gave me the following test failure:

tests/test_table_api.py::test_table_filter_queries[/fixtures/simple_primary_key.json?content__exact=-expected_rows2] FAILED                                       [100%]

=============================================================================== FAILURES ================================================================================
______________________________________ test_table_filter_queries[/fixtures/simple_primary_key.json?content__exact=-expected_rows2] ______________________________________

app_client = <datasette.utils.testing.TestClient object at 0x10d45d2d0>, path = '/fixtures/simple_primary_key.json?content__exact=', expected_rows = [['3', '']]

    @pytest.mark.parametrize(
        "path,expected_rows",
        [
            ("/fixtures/simple_primary_key.json?content=hello", [["1", "hello"]]),
            (
                "/fixtures/simple_primary_key.json?content__contains=o",
                [
                    ["1", "hello"],
                    ["2", "world"],
                    ["4", "RENDER_CELL_DEMO"],
                ],
            ),
            ("/fixtures/simple_primary_key.json?content__exact=", [["3", ""]]),
            (
                "/fixtures/simple_primary_key.json?content__not=world",
                [
                    ["1", "hello"],
                    ["3", ""],
                    ["4", "RENDER_CELL_DEMO"],
                    ["5", "RENDER_CELL_ASYNC"],
                ],
            ),
        ],
    )
    def test_table_filter_queries(app_client, path, expected_rows):
        response = app_client.get(path)
>       assert expected_rows == response.json["rows"]
E       AssertionError: assert [['3', '']] == [['1', 'hello'],\n ['2', 'world'],\n ['3', ''],\n ['4', 'RENDER_CELL_DEMO'],\n ['5', 'RENDER_CELL_ASYNC']]
E         At index 0 diff: ['3', ''] != ['1', 'hello']
E         Right contains 4 more items, first extra item: ['2', 'world']
E         Full diff:
E           [
E         -  ['1',
E         -   'hello'],
E         -  ['2',
E         -   'world'],
E            ['3',
E             ''],
E         -  ['4',
E         -   'RENDER_CELL_DEMO'],
E         -  ['5',
E         -   'RENDER_CELL_ASYNC'],
E           ]

/Users/simon/Dropbox/Development/datasette/tests/test_table_api.py:511: AssertionError


simonw commented Dec 12, 2021

Idea: in JSON output include a warnings block listing any _ parameters that were not recognized.


simonw commented Dec 16, 2021

Ran into a problem prototyping that hook for handling ?_where= - that feature also adds a little bit of extra template context in order to show the interface for removing wheres - the extra_wheres_for_ui variable:

extra_wheres_for_ui = [
    {
        "text": text,
        "remove_url": path_with_removed_args(request, {"_where": text}),
    }
    for text in request.args.getlist("_where")
]

Maybe change to this?

class FilterArguments(NamedTuple):
    where_clauses: List[str]
    params: Dict[str, Union[str, int, float]]
    human_descriptions: List[str]
    extra_context: Dict[str, Any]

That might be necessary for _search too.
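A sketch of how a filters_from_request-style helper might populate that NamedTuple for ?_where= values - filters_from_where_args() is hypothetical, and the removal URLs from the snippet above are omitted here for brevity:

```python
from typing import Any, Dict, List, NamedTuple, Union

class FilterArguments(NamedTuple):
    where_clauses: List[str]
    params: Dict[str, Union[str, int, float]]
    human_descriptions: List[str]
    extra_context: Dict[str, Any]

def filters_from_where_args(where_args):
    # Hypothetical helper: turn ?_where= values into a FilterArguments,
    # stashing what the removal UI needs in extra_context.
    return FilterArguments(
        where_clauses=list(where_args),
        params={},
        human_descriptions=[f"where {w}" for w in where_args],
        extra_context={
            "extra_wheres_for_ui": [{"text": w} for w in where_args]
        },
    )

fa = filters_from_where_args(["state = 'CA'"])
print(fa.where_clauses)  # ["state = 'CA'"]
```

The extra_context slot is the part that lets a filter plugin feed template-only data (like extra_wheres_for_ui) back to the view without widening the hook's core return value.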


simonw commented Dec 16, 2021

I managed to extract both _search= and _where= out using a prototype of that hook. I wonder if it could extract the complex code for ?_next too?


simonw commented Dec 16, 2021

simonw added a commit that referenced this issue Dec 16, 2021
New plugin hook, refs #473

Used it to extract the logic from TableView that handles _search and
_through and _where - refs #1518
simonw added a commit that referenced this issue Dec 17, 2021
- New `filters_from_request` plugin hook, closes #473
- Used it to extract the logic from TableView that handles `_search` and
`_through` and `_where` - refs #1518

Also needed for this plugin work: simonw/datasette-leaflet-freedraw#7

simonw commented Dec 17, 2021

These changes so far are now in the 0.60a0 alpha: https://github.com/simonw/datasette/releases/tag/0.60a0


simonw commented Dec 19, 2021

I sketched out a chained SQL builder pattern that might be useful for further tidying up this code - though with the new plugin hook I'm less excited about it than I was:

class TableQuery:
    def __init__(self, table, columns, pks, is_view=False, prev=None):
        self.table = table
        self.columns = columns
        self.pks = pks
        self.is_view = is_view
        self.prev = prev
        
        # These can be changed for different instances in the chain:
        self._where_clauses = None
        self._order_by = None
        self._page_size = None
        self._offset = None
        self._select_columns = None

        self.select_all_columns = '*'
        self.select_specified_columns = '*'

    @property
    def where_clauses(self):
        wheres = []
        current = self
        while current:
            if current._where_clauses is not None:
                wheres.extend(current._where_clauses)
            current = current.prev
        return list(reversed(wheres))

    def where(self, where):
        new_cls = TableQuery(self.table, self.columns, self.pks, self.is_view, self)
        new_cls._where_clauses = [where]
        return new_cls
        
    @classmethod
    async def introspect(cls, db, table):
        return cls(
            table,
            columns = await db.table_columns(table),
            pks = await db.primary_keys(table),
            is_view = bool(await db.get_view_definition(table))
        )
        
    @property
    def sql_from(self):
        return f"from {self.table}{self.sql_where}"

    @property
    def sql_where(self):
        if not self.where_clauses:
            return ""
        else:
            return f" where {' and '.join(self.where_clauses)}"

    @property
    def sql_no_order_no_limit(self):
        return f"select {self.select_all_columns} from {self.table}{self.sql_where}"

    @property
    def sql(self):
        return f"select {self.select_specified_columns} from {self.table} {self.sql_where}{self._order_by} limit {self._page_size}{self._offset}"

    @property
    def sql_count(self):
        return f"select count(*) {self.sql_from}"


    def __repr__(self):
        return f"<TableQuery sql={self.sql}>"

Usage:

from datasette.app import Datasette
ds = Datasette(memory=True, files=["/Users/simon/Dropbox/Development/datasette/fixtures.db"])
db = ds.get_database("fixtures")
query = await TableQuery.introspect(db, "facetable")
print(query.where("foo = bar").where("baz = 1").sql_count)
# 'select count(*) from facetable where foo = bar and baz = 1'


simonw commented Dec 22, 2021

I think I might be able to clean up a lot of the stuff in here using the render_cell plugin hook:

async def display_columns_and_rows(
    self, database, table, description, rows, link_column=False, truncate_cells=0
):

The catch with that hook - https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-value-column-table-database-datasette - is that it gets called for every single cell. I don't want the overhead of looking up the foreign key relationships etc once for every value in a specific column.

But maybe I could extend the hook to include a shared cache that gets used for all of the cells in a specific table? Something like this:

render_cell(value, column, table, database, datasette, cache)

cache is a dictionary - and the same dictionary is passed to every call to that hook while rendering a specific page.

It's a bit of a gross hack though, and would it ever be useful for plugins outside of the default plugin in Datasette which does the foreign key stuff?

If I can think of one other potential application for this cache then I might implement it.

No, this optimization doesn't make sense: the most complex cell enrichment logic is the stuff that does a select * from categories where id in (2, 5, 6) query, using just the distinct set of IDs that are rendered on the current page. That's not going to fit in the render_cell hook no matter how hard I try to warp it into the right shape, because it needs full visibility of all of the results that are being rendered in order to collect those unique ID values.
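That batched enrichment could be sketched like this - collect the distinct foreign key values on the page, then run a single where id in (...) query for all of them (expand_labels() and the schema here are illustrative, not Datasette code):

```python
import sqlite3

def expand_labels(conn, rows, column, other_table):
    # Collect the distinct foreign key values across the whole page,
    # then fetch all their labels in one query instead of one per cell.
    ids = {row[column] for row in rows if row[column] is not None}
    if not ids:
        return {}
    placeholders = ",".join("?" for _ in ids)
    sql = f"select id, name from {other_table} where id in ({placeholders})"
    return dict(conn.execute(sql, list(ids)))

conn = sqlite3.connect(":memory:")
conn.execute("create table categories (id integer primary key, name text)")
conn.executemany("insert into categories values (?, ?)", [(2, "Art"), (5, "Maps")])
rows = [{"category_id": 2}, {"category_id": 5}, {"category_id": 2}]
labels = expand_labels(conn, rows, "category_id", "categories")
```

This is exactly the full-result-set visibility that a per-cell render_cell hook can't provide.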


simonw commented Dec 22, 2021

I think I can move this much higher up in the method, it's a bit confusing having it half way through:

# Handle ?_filter_column and redirect, if present
redirect_params = filters_should_redirect(special_args)
if redirect_params:
    return self.redirect(
        request,
        path_with_added_args(request, redirect_params),
        forward_querystring=False,
    )
# If ?_sort_by_desc=on (from checkbox) redirect to _sort_desc=(_sort)
if "_sort_by_desc" in special_args:
    return self.redirect(
        request,
        path_with_added_args(
            request,
            {
                "_sort_desc": special_args.get("_sort"),
                "_sort_by_desc": None,
                "_sort": None,
            },
        ),
        forward_querystring=False,
    )


simonw commented Dec 22, 2021

Also the whole special_args vs. request.args thing is pretty confusing - I think that might be an older code pattern from back when I was using Sanic.


simonw commented Dec 22, 2021

New short-term goal: get facets and suggested facets to execute in parallel with the main query. Generate a trace graph that proves that is happening using datasette-pretty-traces.


simonw commented Dec 22, 2021

It looks like the count has to be executed before facets can be, because the facet_class constructor needs that total count figure:

for klass in facet_classes:
    facet_instances.append(
        klass(
            self.ds,
            request,
            database,
            sql=sql_no_order_no_limit,
            params=params,
            table=table,
            metadata=table_metadata,
            row_count=filtered_table_rows_count,
        )
    )

It's used in facet suggestion logic here:

if (
    1 < num_distinct_values < row_count
    and num_distinct_values <= facet_size
    # And at least one has n > 1
    and any(r["n"] > 1 for r in distinct_values)
):
    suggested_facets.append(


simonw commented Dec 22, 2021

New short-term goal: get facets and suggested facets to execute in parallel with the main query. Generate a trace graph that proves that is happening using datasette-pretty-traces.

I wrote code to execute those in parallel using asyncio.gather() - which seems to work but causes the SQL run inside the parallel async def functions not to show up in the trace graph at all.

diff --git a/datasette/views/table.py b/datasette/views/table.py
index 9808fd2..ec9db64 100644
--- a/datasette/views/table.py
+++ b/datasette/views/table.py
@@ -1,3 +1,4 @@
+import asyncio
 import urllib
 import itertools
 import json
@@ -615,44 +616,37 @@ class TableView(RowTableShared):
         if request.args.get("_timelimit"):
             extra_args["custom_time_limit"] = int(request.args.get("_timelimit"))
 
-        # Execute the main query!
-        results = await db.execute(sql, params, truncate=True, **extra_args)
-
-        # Calculate the total count for this query
-        filtered_table_rows_count = None
-        if (
-            not db.is_mutable
-            and self.ds.inspect_data
-            and count_sql == f"select count(*) from {table} "
-        ):
-            # We can use a previously cached table row count
-            try:
-                filtered_table_rows_count = self.ds.inspect_data[database]["tables"][
-                    table
-                ]["count"]
-            except KeyError:
-                pass
-
-        # Otherwise run a select count(*) ...
-        if count_sql and filtered_table_rows_count is None and not nocount:
-            try:
-                count_rows = list(await db.execute(count_sql, from_sql_params))
-                filtered_table_rows_count = count_rows[0][0]
-            except QueryInterrupted:
-                pass
-
-        # Faceting
-        if not self.ds.setting("allow_facet") and any(
-            arg.startswith("_facet") for arg in request.args
-        ):
-            raise BadRequest("_facet= is not allowed")
+        async def execute_count():
+            # Calculate the total count for this query
+            filtered_table_rows_count = None
+            if (
+                not db.is_mutable
+                and self.ds.inspect_data
+                and count_sql == f"select count(*) from {table} "
+            ):
+                # We can use a previously cached table row count
+                try:
+                    filtered_table_rows_count = self.ds.inspect_data[database][
+                        "tables"
+                    ][table]["count"]
+                except KeyError:
+                    pass
+
+            if count_sql and filtered_table_rows_count is None and not nocount:
+                try:
+                    count_rows = list(await db.execute(count_sql, from_sql_params))
+                    filtered_table_rows_count = count_rows[0][0]
+                except QueryInterrupted:
+                    pass
+
+            return filtered_table_rows_count
+
+        filtered_table_rows_count = await execute_count()
 
         # pylint: disable=no-member
         facet_classes = list(
             itertools.chain.from_iterable(pm.hook.register_facet_classes())
         )
-        facet_results = {}
-        facets_timed_out = []
         facet_instances = []
         for klass in facet_classes:
             facet_instances.append(
@@ -668,33 +662,58 @@ class TableView(RowTableShared):
                 )
             )
 
-        if not nofacet:
-            for facet in facet_instances:
-                (
-                    instance_facet_results,
-                    instance_facets_timed_out,
-                ) = await facet.facet_results()
-                for facet_info in instance_facet_results:
-                    base_key = facet_info["name"]
-                    key = base_key
-                    i = 1
-                    while key in facet_results:
-                        i += 1
-                        key = f"{base_key}_{i}"
-                    facet_results[key] = facet_info
-                facets_timed_out.extend(instance_facets_timed_out)
-
-        # Calculate suggested facets
-        suggested_facets = []
-        if (
-            self.ds.setting("suggest_facets")
-            and self.ds.setting("allow_facet")
-            and not _next
-            and not nofacet
-            and not nosuggest
-        ):
-            for facet in facet_instances:
-                suggested_facets.extend(await facet.suggest())
+        async def execute_suggested_facets():
+            # Calculate suggested facets
+            suggested_facets = []
+            if (
+                self.ds.setting("suggest_facets")
+                and self.ds.setting("allow_facet")
+                and not _next
+                and not nofacet
+                and not nosuggest
+            ):
+                for facet in facet_instances:
+                    suggested_facets.extend(await facet.suggest())
+            return suggested_facets
+
+        async def execute_facets():
+            facet_results = {}
+            facets_timed_out = []
+            if not self.ds.setting("allow_facet") and any(
+                arg.startswith("_facet") for arg in request.args
+            ):
+                raise BadRequest("_facet= is not allowed")
+
+            if not nofacet:
+                for facet in facet_instances:
+                    (
+                        instance_facet_results,
+                        instance_facets_timed_out,
+                    ) = await facet.facet_results()
+                    for facet_info in instance_facet_results:
+                        base_key = facet_info["name"]
+                        key = base_key
+                        i = 1
+                        while key in facet_results:
+                            i += 1
+                            key = f"{base_key}_{i}"
+                        facet_results[key] = facet_info
+                    facets_timed_out.extend(instance_facets_timed_out)
+
+            return facet_results, facets_timed_out
+
+        # Execute the main query, facets and facet suggestions in parallel:
+        (
+            results,
+            suggested_facets,
+            (facet_results, facets_timed_out),
+        ) = await asyncio.gather(
+            db.execute(sql, params, truncate=True, **extra_args),
+            execute_suggested_facets(),
+            execute_facets(),
+        )
+
+        results = await db.execute(sql, params, truncate=True, **extra_args)
 
         # Figure out columns and rows for the query
         columns = [r[0] for r in results.description]

Here's the trace for http://127.0.0.1:4422/fixtures/compound_three_primary_keys?_trace=1&_facet=pk1&_facet=pk2 with the missing facet and facet suggestion queries:

[Screenshot: trace graph for that page with the facet and facet suggestion queries missing]


simonw commented Dec 22, 2021

The reason they aren't showing up in the traces is that traces are stored just for the currently executing asyncio task ID:

# asyncio.current_task was introduced in Python 3.7:
for obj in (asyncio, asyncio.Task):
    current_task = getattr(obj, "current_task", None)
    if current_task is not None:
        break


def get_task_id():
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        return None
    return id(current_task(loop=loop))

This is so traces for other incoming requests don't end up mixed together. But there's no current mechanism to track async tasks that are effectively "child tasks" of the current request, and hence should be tracked the same.

https://stackoverflow.com/a/69349501/6083 suggests that you pass the task ID as an argument to the child tasks that are executed using asyncio.gather() to work around this kind of problem.
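An alternative to threading the task ID through every call: a contextvars.ContextVar set in the request's task is automatically copied into child tasks created by asyncio.gather(), so child traces can group under the original request. A minimal sketch of that idea (all names here are invented, not Datasette's tracing code):

```python
import asyncio
import contextvars

# Each asyncio Task copies the current context at creation time, so a
# value set in the request task before gather() is visible in children.
trace_task_id = contextvars.ContextVar("trace_task_id", default=None)

traces = []

def record_trace(sql):
    # Group the trace under the ContextVar's ID rather than the
    # currently executing task's own ID.
    traces.append((trace_task_id.get(), sql))

async def child_query(sql):
    record_trace(sql)  # sees the parent request's ID

async def handle_request():
    trace_task_id.set(id(asyncio.current_task()))
    await asyncio.gather(child_query("select 1"), child_query("select 2"))
    return trace_task_id.get()

request_id = asyncio.run(handle_request())
assert all(task_id == request_id for task_id, _ in traces)
```

The Stack Overflow approach of passing the ID explicitly works too; the ContextVar version just avoids changing every child function's signature.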


simonw commented Dec 11, 2023

Did this in the 1.0 alphas.

@simonw simonw closed this as completed Dec 11, 2023