Some tag albums do not load size_variants for images #1326
I am sorry to hear that. But if even you yourself cannot reproduce the problem for a new tag album, then it is going to be really difficult to track down this error. There is only a single option I see here, but it is up to you to decide if you are willing to go that way. It seems to be a bug in the backend which sends the broken response, and without your data I don't see another chance to track down the problem. However, you have to decide if you are willing to share your DB with me or if that is too privacy-infringing for you.
Yes, I considered the null size_variants the main bug; the rest was included in case it was useful. I have no issue sending a dump of the DB. What is the best way to send the file? It is 42 MB.
Can you put it somewhere on your web server and post me a download link? If you don't want to post the link here, you can contact me privately on Gitter. Or you post the link here and I delete the post after I have taken it. You also need to tell me which tag album I should look at and which user I need to log in to Lychee as. I only need the user name; the password is not required, as I can overwrite the password on my end.
OK, I have downloaded it, successfully imported your DB into mine and (finally) got to reset the password. The latter took me some time. |
Unfortunately, I can only report that I cannot reproduce your issue. It works for me. As you can see in this screenshot, my browser tries to download the size variants of type "small". Of course, this last step fails, because I don't have the media files, but the browser would not even try if the JSON did not include the links to the size variants. Here is the screenshot with the JSON response, and this is the JSON response truncated to its first elements (click to expand):
{
"id": "MQTPtB8HTwEjaZqGyAsQNuzs",
"show_tags": ["Alex"],
"max_taken_at": null,
"min_taken_at": null,
"thumb": {
"id": "kdQAigqTbenfget2WUJkk5xW",
"type": "image\/jpeg",
"thumb": "uploads\/thumb\/a7ccb66be0baead7b7b02e6d264ecede.jpeg",
"thumb2x": "uploads\/thumb\/[email protected]"
},
"photos": [{
"id": "nHveJr37HGcmmLYVNdMfkg3Q",
"created_at": "2022-05-14T13:58:44.278139+02:00",
"updated_at": "2022-05-14T13:59:02.711732+02:00",
"album_id": "bUlG9rrSCG3JAwgOPw_T8rvy",
"title": "2022-05-14-0036",
"description": "Phoenix, AZ",
"tags": ["Alex"],
"license": "CC-BY-NC-ND-4.0",
"is_public": 0,
"is_starred": false,
"iso": null,
"make": "Nikon",
"model": "LS-4000",
"lens": null,
"aperture": null,
"shutter": null,
"focal": null,
"latitude": null,
"longitude": null,
"altitude": null,
"img_direction": null,
"location": null,
"taken_at": "2022-05-01T12:00:00.000000-07:00",
"taken_at_orig_tz": "America\/Los_Angeles",
"type": "image\/jpeg",
"filesize": 0,
"checksum": "ae48142d99882e31278a162006511d91e418ad7a",
"original_checksum": "ae48142d99882e31278a162006511d91e418ad7a",
"live_photo_content_id": null,
"live_photo_checksum": null,
"live_photo_url": null,
"is_downloadable": true,
"is_share_button_visible": true,
"size_variants": {
"original": {
"type": 0,
"width": 3688,
"height": 5632,
"filesize": 8678834,
"url": "uploads\/big\/ae48142d99882e31278a162006511d91.jpeg"
},
"medium2x": {
"type": 1,
"width": 1414,
"height": 2160,
"filesize": 1088646,
"url": "uploads\/medium\/[email protected]"
},
"medium": {
"type": 2,
"width": 707,
"height": 1080,
"filesize": 262731,
"url": "uploads\/medium\/ae48142d99882e31278a162006511d91.jpeg"
},
"small2x": {
"type": 3,
"width": 471,
"height": 720,
"filesize": 111731,
"url": "uploads\/small\/[email protected]"
},
"small": {
"type": 4,
"width": 236,
"height": 360,
"filesize": 28713,
"url": "uploads\/small\/ae48142d99882e31278a162006511d91.jpeg"
},
"thumb2x": {
"type": 5,
"width": 400,
"height": 400,
"filesize": 46960,
"url": "uploads\/thumb\/[email protected]"
},
"thumb": {
"type": 6,
"width": 200,
"height": 200,
"filesize": 14952,
"url": "uploads\/thumb\/ae48142d99882e31278a162006511d91.jpeg"
}
},
"next_photo_id": "i6tbOqy9XkrFFTb2T5l18zYL",
"previous_photo_id": "Ql4Ef8AQPE-G83yz-lRivhNq"
}, {
"id": "i6tbOqy9XkrFFTb2T5l18zYL",
"created_at": "2022-05-14T13:58:43.241967+02:00",
"updated_at": "2022-05-14T13:59:02.697328+02:00",
"album_id": "bUlG9rrSCG3JAwgOPw_T8rvy",
"title": "2022-05-14-0035",
"description": "Phoenix, AZ",
"tags": ["Alex"],
"license": "CC-BY-NC-ND-4.0",
"is_public": 0,
"is_starred": false,
"iso": null,
"make": "Nikon",
"model": "LS-4000",
"lens": null,
"aperture": null,
"shutter": null,
"focal": null,
"latitude": null,
"longitude": null,
"altitude": null,
"img_direction": null,
"location": null,
"taken_at": "2022-05-01T12:00:00.000000-07:00",
"taken_at_orig_tz": "America\/Los_Angeles",
"type": "image\/jpeg",
"filesize": 0,
"checksum": "df972a27856fc4b20bc074cd2d4e72a65ba2982e",
"original_checksum": "df972a27856fc4b20bc074cd2d4e72a65ba2982e",
"live_photo_content_id": null,
"live_photo_checksum": null,
"live_photo_url": null,
"is_downloadable": true,
"is_share_button_visible": true,
"size_variants": {
"original": {
"type": 0,
"width": 3688,
"height": 5640,
"filesize": 9017034,
"url": "uploads\/big\/df972a27856fc4b20bc074cd2d4e72a6.jpeg"
},
"medium2x": {
"type": 1,
"width": 1412,
"height": 2160,
"filesize": 1129510,
"url": "uploads\/medium\/[email protected]"
},
"medium": {
"type": 2,
"width": 706,
"height": 1080,
"filesize": 271753,
"url": "uploads\/medium\/df972a27856fc4b20bc074cd2d4e72a6.jpeg"
},
"small2x": {
"type": 3,
"width": 471,
"height": 720,
"filesize": 115532,
"url": "uploads\/small\/[email protected]"
},
"small": {
"type": 4,
"width": 235,
"height": 360,
"filesize": 29599,
"url": "uploads\/small\/df972a27856fc4b20bc074cd2d4e72a6.jpeg"
},
"thumb2x": {
"type": 5,
"width": 400,
"height": 400,
"filesize": 48567,
"url": "uploads\/thumb\/[email protected]"
},
"thumb": {
"type": 6,
"width": 200,
"height": 200,
"filesize": 15323,
"url": "uploads\/thumb\/df972a27856fc4b20bc074cd2d4e72a6.jpeg"
}
},
"previous_photo_id": "nHveJr37HGcmmLYVNdMfkg3Q",
"next_photo_id": "GPjbt0B3KBKQm_iCStmEWZXb"
}, {
"id": "GPjbt0B3KBKQm_iCStmEWZXb",
"created_at": "2022-05-14T03:13:13.113101+02:00",
"updated_at": "2022-05-14T12:43:09.887221+02:00",
"album_id": "NpvFgSmSs1fqocFavGATgq_X",
"title": "2022-05-13-0034",
"description": "Red Cliffs, NV",
"tags": ["Alex"],
"license": "CC-BY-NC-ND-4.0",
"is_public": 0,
"is_starred": false,
"iso": null,
"make": "Nikon",
"model": "LS-4000",
"lens": null,
"aperture": null,
"shutter": null,
"focal": null,
"latitude": null,
"longitude": null,
"altitude": null,
"img_direction": null,
"location": null,
"taken_at": "2022-05-01T12:00:00.000000-07:00",
"taken_at_orig_tz": "America\/Los_Angeles",
"type": "image\/jpeg",
"filesize": 0,
"checksum": "a3b4f128860cb566927cd699e7ef19b0d6c15537",
"original_checksum": "a3b4f128860cb566927cd699e7ef19b0d6c15537",
"live_photo_content_id": null,
"live_photo_checksum": null,
"live_photo_url": null,
"is_downloadable": true,
"is_share_button_visible": true,
"size_variants": {
"original": {
"type": 0,
"width": 5632,
"height": 3688,
"filesize": 7582218,
"url": "uploads\/big\/a3b4f128860cb566927cd699e7ef19b0.jpeg"
},
"medium2x": {
"type": 1,
"width": 3299,
"height": 2160,
"filesize": 2253539,
"url": "uploads\/medium\/[email protected]"
},
"medium": {
"type": 2,
"width": 1649,
"height": 1080,
"filesize": 651860,
"url": "uploads\/medium\/a3b4f128860cb566927cd699e7ef19b0.jpeg"
},
"small2x": {
"type": 3,
"width": 1100,
"height": 720,
"filesize": 316462,
"url": "uploads\/small\/[email protected]"
},
"small": {
"type": 4,
"width": 550,
"height": 360,
"filesize": 90236,
"url": "uploads\/small\/a3b4f128860cb566927cd699e7ef19b0.jpeg"
},
"thumb2x": {
"type": 5,
"width": 400,
"height": 400,
"filesize": 65902,
"url": "uploads\/thumb\/[email protected]"
},
"thumb": {
"type": 6,
"width": 200,
"height": 200,
"filesize": 20629,
"url": "uploads\/thumb\/a3b4f128860cb566927cd699e7ef19b0.jpeg"
}
},
"previous_photo_id": "i6tbOqy9XkrFFTb2T5l18zYL",
"next_photo_id": "656pjb4jIMx_gstz3jS32wkt"
}, {
"id": "656pjb4jIMx_gstz3jS32wkt",
"created_at": "2022-05-14T03:13:07.534035+02:00",
"updated_at": "2022-05-14T03:13:59.452552+02:00",
"album_id": "NpvFgSmSs1fqocFavGATgq_X",
"title": "2022-05-13-0031",
"description": "Red Cliffs, NV",
"tags": ["Alex"],
"license": "CC-BY-NC-ND-4.0",
"is_public": 0,
"is_starred": false,
"iso": null,
"make": "Nikon",
"model": "LS-4000",
"lens": null,
"aperture": null,
"shutter": null,
"focal": null,
"latitude": null,
"longitude": null,
"altitude": null,
"img_direction": null,
"location": null,
"taken_at": "2022-05-01T12:00:00.000000-07:00",
"taken_at_orig_tz": "America\/Los_Angeles",
"type": "image\/jpeg",
"filesize": 0,
"checksum": "4bf5ec7a2678ca6fb386705e932900457174a389",
"original_checksum": "4bf5ec7a2678ca6fb386705e932900457174a389",
"live_photo_content_id": null,
"live_photo_checksum": null,
"live_photo_url": null,
"is_downloadable": true,
"is_share_button_visible": true,
"size_variants": {
"original": {
"type": 0,
"width": 5640,
"height": 3684,
"filesize": 8243463,
"url": "uploads\/big\/4bf5ec7a2678ca6fb386705e93290045.jpeg"
},
"medium2x": {
"type": 1,
"width": 3307,
"height": 2160,
"filesize": 2450353,
"url": "uploads\/medium\/[email protected]"
},
"medium": {
"type": 2,
"width": 1653,
"height": 1080,
"filesize": 700993,
"url": "uploads\/medium\/4bf5ec7a2678ca6fb386705e93290045.jpeg"
},
"small2x": {
"type": 3,
"width": 1102,
"height": 720,
"filesize": 336686,
"url": "uploads\/small\/[email protected]"
},
"small": {
"type": 4,
"width": 551,
"height": 360,
"filesize": 96499,
"url": "uploads\/small\/4bf5ec7a2678ca6fb386705e93290045.jpeg"
},
"thumb2x": {
"type": 5,
"width": 400,
"height": 400,
"filesize": 71212,
"url": "uploads\/thumb\/[email protected]"
},
"thumb": {
"type": 6,
"width": 200,
"height": 200,
"filesize": 22470,
"url": "uploads\/thumb\/4bf5ec7a2678ca6fb386705e93290045.jpeg"
}
},
"previous_photo_id": "GPjbt0B3KBKQm_iCStmEWZXb",
"next_photo_id": "CcjCgUHwqcO7K8affrREgdCC"
}]
}
This looks good. But as you might see from my second screenshot, the debug panel of my browser says "Response truncated" (in German: "Antwort wurde gekürzt"). Even after I copied this truncated version into my text editor, it was 100k lines long! WTF?! The "truncated" JSON response was 2.5 MB in size. This is huge, considering that it is only text. My graphical text editor even crashed when I tried to format the indentation of the JSON response nicely, so I had to switch to another tool. Are you sure that your server is not just running out of resources, because there are so many photos in this tag album? I ran the SQL command directly on the DBMS CLI. There are 7384 photos! In one album! I assume that maybe PHP is running out of resources. Otherwise I don't know anything else we could do.
The tag album should only have around 100 photos, maybe slightly more.
I added like 3 photos to it earlier, and that seems to be what triggered the issue.
If I run the query directly, I get 1074 rows returned, which seems very high to me, but a lot less than 7384. Spot-checking the 1074 seems to indicate that they are correctly associated with the album. Checking for logs now.
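For reference, a membership count like the one described can be sketched against a toy schema. Everything below is an illustrative assumption, not Lychee's actual schema: a single `photos` table with a comma-separated `tags` column, matching the tag lists visible in the JSON above.

```python
import sqlite3

# Toy schema -- an assumption for illustration; Lychee's real tables differ.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE photos (id TEXT PRIMARY KEY, tags TEXT)")
con.executemany(
    "INSERT INTO photos VALUES (?, ?)",
    [
        ("nHveJr37HGcmmLYVNdMfkg3Q", "Alex"),
        ("i6tbOqy9XkrFFTb2T5l18zYL", "Alex,Travel"),
        ("GPjbt0B3KBKQm_iCStmEWZXb", "Travel"),
    ],
)

# Count photos whose comma-separated tag list contains the exact tag "Alex";
# the padding with commas avoids matching "Alexandra" etc.
(count,) = con.execute(
    "SELECT COUNT(*) FROM photos "
    "WHERE ',' || tags || ',' LIKE '%,' || ? || ',%'",
    ("Alex",),
).fetchone()
print(count)  # 2
```

Comparing such a direct count against the row count the application reports is exactly the kind of sanity check being done here with 1074 vs. 7384.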
The Lychee logs accessible from the sidebar do not have anything, and neither do the httpd logs. Any thoughts on where else to check? The JSON response for me is 1.07 MB. I have a hard time believing this is causing the machine to starve for resources. It also fails very quickly, about one second after the request is fired. The HTTP response code for the call is 200.
HTTP response code 200 means that it has not failed. If any exception had been thrown inside the backend, the response code would be 4xx or 5xx (depending on the kind of error).
I don't know your machine. It was just an idea. BTW, ~1 second is not considered very quick. Everything above 500 ms is considered "slow" in modern web development. I don't necessarily share that opinion, and Lychee won't be able to meet that limit for large albums. However, the magnitude of 1 s sounds plausible. I measured around 3.4 s for the (full) response on my 7-year-old desktop PC (cf. my screenshot). The size of your response is also reasonable, given that your response does not include all the path strings which were included in the response I observed.

This error is really strange. I am using the exact same DB as you, i.e. your DB dump; compare the IDs of the album and photos included in the response between your screenshots and mine. We are using the exact same Lychee version (I compared the commit IDs). But you get an incomplete server response and there is nothing in your logs. The "biggest" difference that I can spot is that you are already running PHP 8.1 and I am still on PHP 8.0. But I highly doubt that this is the reason.

I honestly don't have an idea how to track that problem down. There must be a difference between your setup and my test setup which we are missing here (besides the PHP version). As there is nothing in the logs, I slightly suspect that the problem is outside of Lychee. That is why I came up with the resource idea. What does your setup look like? Is the web server provided by a shared hoster? Is it behind a reverse proxy? Anything else? Could we be experiencing a caching problem here? I am only brainstorming right now.

I see that you have installed the development dependencies and enabled the debug flag, so you should see the Laravel debug bar at the bottom. Which SQL queries are executed when you open the album? (It should be a handful.) How many models are hydrated? Here are my results: Btw, 48 SQL queries are approx. 30 too many. That is a regression bug I have to take care of.
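A quick back-of-envelope calculation illustrates why the album size matters for the response. The per-photo byte counts below are rough assumptions, not measured values; they are merely plausible sizes for a photo object with and without the seven size-variant URLs:

```python
# Rough arithmetic only; the per-photo sizes are assumed, not measured.
photos_in_album = 7384

bytes_per_photo_full = 2200          # photo JSON incl. seven size-variant URLs
bytes_per_photo_null_variants = 150  # photo JSON with "size_variants": null

full_mb = photos_in_album * bytes_per_photo_full / 1024 ** 2
null_mb = photos_in_album * bytes_per_photo_null_variants / 1024 ** 2
print(f"full response: ~{full_mb:.1f} MB, with null variants: ~{null_mb:.2f} MB")
```

Under these assumptions a full response lands in the double-digit-megabyte range while a response with null size_variants stays near 1 MB, which is consistent with the magnitudes reported in this thread.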
😞 But this should not be the reason, because I see it, too.
Is there any way to copy and paste the queries from the debug bar? I seem to be unable to in both Chrome and Firefox. Lychee is running on a Linux server in my living room. No proxies or anything. I have no cache above Lychee or httpd.
Can you log in to your database as the same DB user which is used by Lychee, and run this query:
This is very similar to what @SerenaButler is experiencing. I used Serena's DB to check, and I also get the error. These are my php.ini and MySQL conf files.
Have you also tried to run the query from within Laravel? I envision a small test script which we preliminarily place alongside the other diagnostic checks. This way we can ensure that the Laravel stack is used and that the same PHP and MySQL configuration applies as for the web server. I will create a development branch and write such a script. However, @ildyria must run the script, because I cannot reproduce the problem.
@ildyria I created the branch. All you have to do is:
Possible results:
PS: Don't worry that the automatic unit tests fail for the branch. The unit tests don't use the sample DB, hence the diagnostic has to fail.
It's 16 MB.
If I run the mysql-1000=bug branch, I get the error. Looking in the DB, I can't find any reference to that ID: no photos with that album_id, and no albums, base_albums, or tag_albums with that ID.
Because the test only works with a special DB dump which @ildyria uses, it is expected not to work with your DB. See #1326 (comment):
I just realized this as I read your comment about the failing unit tests. I did think that importing a dump would essentially give you that DB locally. Since my DB experiences the issue, I thought it would work.
This is great!
As the error appears twice, this means it is not an error in the relationship layer. The last error is from a direct Eloquent query. Hence, we can already infer that the error happens somewhere between the Eloquent layer and the SQL layer. I will come up with the next test soon.
Pushed the next diagnostic. @ildyria: Your turn. |
Interesting. Again a "double" error. So it is not a memory resource problem on the ORM layer. The results do not even leave the DB abstraction layer. I will write a new diagnostic test tomorrow. Today I am calling it a day.
I pushed a new update. I also created the command
Web:
CLI:
Same.
Great. Then we can proceed with the CLI. I am currently trying to find out how to implement our own Laravel database classes. Currently, I suspect the lines `return $this->processor->processSelect($this, $this->runSelect());` and `return $this->connection->select($this->toSql(), $this->getBindings(), ! $this->useWritePdo);` inside the Laravel framework to be hot candidates. But without useful output we won't get any further. Unfortunately, I cannot reproduce the problem, which would make everything so much easier with a proper step debugger at hand.
I am preliminarily giving up here for the moment. As I cannot reproduce the problem and @ildyria has no proper step debugger, I tried to extend the default Laravel database classes. But I cannot get Laravel to use them. I tried to register them in \App\Providers\AppServiceProvider::register(), but Laravel ignores that and still uses the built-in classes. You can verify this by running the command. I posted a question on Stack Overflow: "How to register an own implementation of …"
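The instrumentation idea being attempted here — wrapping the stock select path so that every query logs its SQL, bindings, and row count before the results cross the layer boundary — can be sketched language-agnostically. Everything below (function names, the fake connection) is hypothetical and only mirrors the pattern, not Laravel's actual API:

```python
import functools

def instrument(select_fn, log):
    """Wrap a select-style function so every call records its SQL,
    bindings, and row count -- the pattern attempted here for
    Laravel's connection/processor classes."""
    @functools.wraps(select_fn)
    def wrapper(sql, bindings=()):
        rows = select_fn(sql, bindings)
        log.append((sql, tuple(bindings), len(rows)))
        return rows
    return wrapper

# Toy stand-in for a DB connection's select() method.
def fake_select(sql, bindings=()):
    return [{"id": 1}, {"id": 2}]

log = []
select = instrument(fake_select, log)
select("SELECT * FROM photos WHERE album_id = ?", ("MQTPtB8HTwEjaZqGyAsQNuzs",))
print(log[0][2])  # 2
```

If the row count logged at this boundary is already zero while the DB itself holds the rows, the problem sits below the ORM — which is exactly the inference the diagnostics in this thread are trying to make.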
@ildyria: I pushed more tests for the console command and managed to add instrumentation to the DB classes. Attention: the debug output prints your DB configuration, including the DB user and password. So only use it with a local development DB which is not reachable from the internet. @nagmat84's output, where everything is fine, looks like this:
That's weird, I can't reproduce it anymore; the three tests work fine. Looks like it was only a temporary issue for me, and after rebooting the problem is gone. I also couldn't find a way to reproduce it. What I tried:
I'm sorry, but it looks like somebody else who has this issue has to try it :(
The Heisenbug is back again. 😟
I can properly access the repo now, but it seems that my server has gone offline. I am away and will not be home to see what is wrong for the next few days. I will do it once I bring the server back online. Sorry for the delay; this server being down is causing many other issues as well.
@jln646v Do you have an estimate for when you will have your server up and running again and when you might be able to run the script?
I got back last night and the server is up now. It just had to be manually powered on after a power outage. I will run this in the next half hour.
Here is @jln646v's output:
Your output shows
three times. If the error were still present, this message should have been shown only twice, together with one failing test. Are you sure that the error still persists on your system? Can you open Lychee and go to the problematic album? I suspect that the error has been gone since you rebooted your system after the power failure.
I confirmed the issue is still present. I was hopeful when I saw the output as well.
Is your web server using the same PHP version and PDO extension as your CLI?
Can you somehow run the script through your web server?
Both are using PHP 8.1.7. I am working on executing the script via Apache now.
Result from Apache:
It may be sufficient to mark up the output as a fenced code block. Anyhow, I haven't looked closer into this issue so far, but I can imagine that it is caused by some memory corruption due to something else (that would explain the Heisenbug-like behavior). If the issue happens again, please try to reproduce it with OPcache disabled.
I consistently have this issue and am still experiencing it. How do I disable OPcache?
First, check via:
I disabled OPcache for PHP running under Apache, and I still have the issue. The test script seems to yield the same results as before.
It seems that we are slowly getting somewhere. @craigfrancis could also reproduce the problem. @jln646v: I will try to update the test branch today. I will ping you when I have it finished.
@jln646v It seems to be even easier. From your diagnostic output, I see that you are running MariaDB 10.6.7, which is affected by this bug: https://jira.mariadb.org/browse/MDEV-27937. Can you update to 10.6.8 and report back? The affected and unaffected versions are listed in the referenced upstream bug. As @cmb69 correctly pointed out, there is no reason to implement a workaround; we can simply ask our users to update or downgrade to an unaffected version.
A workaround is needed for users on hosted environments with no chance to change anything :-) My hoster is running v10.5.15-MariaDB-1:10.5.15+maria~focal-log, btw.
You can ask your provider. I don't think we should spend time and effort on a problem that has already been fixed in a dependency and only affects a very small number of users.
According to their bug tracker, 10.5.16 contains the fix (along with fixes for 24 security issues) and was released last month.
Exactly, that is not going to happen. We cannot implement a workaround for every crazy bug in an upstream dependency; otherwise our code will very soon become unmaintainable. We already have too many variations in the SQL queries for the different SQL dialects (SQLite, MySQL/MariaDB, and PostgreSQL). We cannot also take care of strange bugs in those implementations, especially not if they are Heisenbug-like memory-corruption bugs which should not happen at all (the bug was rated "major" upstream).
On that note, I'm going to close this. If upgrading to an appropriate MariaDB version doesn't fix it, feel free to reopen. |
No, but that being said, it might be an interesting thing to check in the Diagnostics (add a simple MariaDB version check).
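Such a diagnostic could be little more than a version compare. A minimal sketch, covering only the fixed releases named in this thread (10.5.16 and 10.6.8); the authoritative affected/fixed list lives in MDEV-27937:

```python
def parse_version(v):
    # "v10.5.15-MariaDB-1:..." -> (10, 5, 15); suffixes are stripped.
    return tuple(int(p) for p in v.lstrip("v").split("-")[0].split(".")[:3])

# First fixed release per series, per this thread / MDEV-27937.
FIXED = {(10, 5): (10, 5, 16), (10, 6): (10, 6, 8)}

def is_affected(version_string):
    v = parse_version(version_string)
    fixed = FIXED.get(v[:2])
    # Unknown series: assume unaffected rather than warn spuriously.
    return fixed is not None and v < fixed

print(is_affected("10.6.7"))            # True
print(is_affected("v10.5.16-MariaDB"))  # False
```

A real diagnostic would feed `SELECT VERSION()` into such a check and print a warning rather than a boolean.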
Yeah, we could, but at the moment I am still busy with PHPStan for Nested Set, and I believe that six months from now nobody will encounter this bug anymore, as the patch has been applied to all actively supported versions of MariaDB. Hence, even someone running an LTS version of Debian or Ubuntu should get the patch in a timely manner. This means the diagnostic would only be useful for a very limited time span. Personally, I don't feel like implementing it.
I agree that checking the MariaDB version numbers is probably not a good idea. However, I think that we should add a meaningful diagnostic to our code for when this bug occurs. From what we've seen, the
Yes, when the bug is triggered, the DB server is so screwed up that it simply returns an empty result set for the entire query. This also yielded a
In theory I am fine with throwing an exception there. But I had it that way originally, and I dimly remember that it was you (@kamil4) who told me that there are valid use cases for an original size variant being null. We even take care of this unusual condition in the Ghostbuster command. Hence, I am not entirely sure whether it is a good idea to assume that the original size variant is never null.
I can't imagine a case where an original size variant would be missing in the DB, though it's getting late and my imagination may simply be lacking at the moment 😜. Perhaps I meant this code (lines 439 to 446 in 7c8fa64):
Confirming that upgrading to MariaDB 10.8 solved the issue. Thanks for all your help with this.
I LOVE my hoster and can confirm the problem is gone :-) v10.5.16-MariaDB. Thanks for all the effort.
Detailed description of the problem [REQUIRED]
I have a tag album that seems to be getting null for all size_variants of all child images. I confirmed the size_variants are in the DB, and the images load in their normal albums. This leads to an empty album being displayed. Not all tag albums are affected, and I am not able to reproduce this behavior in a new test tag album.
Sample of json response:
Console error msg:
Uncaught TypeError: _photo.size_variants.original is null
In DB:
MariaDB [lychee]> select * from size_variants where photo_id='GPjbt0B3KBKQm_iCStmEWZXb';
+-------+--------------------------+------+-------------------------------------------------+-------+--------+----------+
| id | photo_id | type | short_path | width | height | filesize |
+-------+--------------------------+------+-------------------------------------------------+-------+--------+----------+
| 63697 | GPjbt0B3KBKQm_iCStmEWZXb | 0 | big/a3b4f128860cb566927cd699e7ef19b0.jpeg | 5632 | 3688 | 7582218 |
| 63705 | GPjbt0B3KBKQm_iCStmEWZXb | 1 | medium/[email protected] | 3299 | 2160 | 2253539 |
| 63704 | GPjbt0B3KBKQm_iCStmEWZXb | 2 | medium/a3b4f128860cb566927cd699e7ef19b0.jpeg | 1649 | 1080 | 651860 |
| 63703 | GPjbt0B3KBKQm_iCStmEWZXb | 3 | small/[email protected] | 1100 | 720 | 316462 |
| 63701 | GPjbt0B3KBKQm_iCStmEWZXb | 4 | small/a3b4f128860cb566927cd699e7ef19b0.jpeg | 550 | 360 | 90236 |
| 63699 | GPjbt0B3KBKQm_iCStmEWZXb | 5 | thumb/[email protected] | 400 | 400 | 65902 |
| 63698 | GPjbt0B3KBKQm_iCStmEWZXb | 6 | thumb/a3b4f128860cb566927cd699e7ef19b0.jpeg | 200 | 200 | 20629 |
+-------+--------------------------+------+-------------------------------------------------+-------+--------+----------+
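Since all seven rows are present in the DB, the backend should be able to rebuild the size_variants object the frontend expects. The numeric type column maps onto the JSON keys seen earlier in this thread (type 0 = original through type 6 = thumb). A sketch of that mapping follows; the dict shape is inferred from the JSON responses above, not taken from Lychee's code:

```python
# Mapping inferred from the JSON responses in this thread (type 0..6).
VARIANT_NAMES = [
    "original", "medium2x", "medium", "small2x", "small", "thumb2x", "thumb",
]

rows = [  # a subset of the size_variants rows shown above
    (63697, "GPjbt0B3KBKQm_iCStmEWZXb", 0,
     "big/a3b4f128860cb566927cd699e7ef19b0.jpeg", 5632, 3688, 7582218),
    (63698, "GPjbt0B3KBKQm_iCStmEWZXb", 6,
     "thumb/a3b4f128860cb566927cd699e7ef19b0.jpeg", 200, 200, 20629),
]

# Key each row by its variant name and prefix the short path, matching
# the "uploads/..." URLs in the JSON above.
size_variants = {
    VARIANT_NAMES[t]: {"width": w, "height": h, "filesize": fs,
                       "url": "uploads/" + path}
    for (_id, _photo, t, path, w, h, fs) in rows
}
print(sorted(size_variants))  # ['original', 'thumb']
```

That these rows exist yet the serialized size_variants come back null is what points the investigation below the ORM layer.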
Steps to reproduce the issue
I am not sure what triggered this condition or what it is choking on.
Screenshots
Output of the diagnostics [REQUIRED]
Browser and system
Firefox 100.0, macOS 12.3.1