[Bug][Jira] issues disappearing from dataset, progressive load issue? #7826

Open
2 of 3 tasks
wouldd opened this issue Aug 2, 2024 · 38 comments
Assignees
Labels
severity/p1 This bug affects functionality or significantly affect ux type/bug This issue is a bug

Comments

@wouldd
Contributor

wouldd commented Aug 2, 2024

Search before asking

  • I had searched in the issues and found no similar issues.

What happened

We have noticed in some of our graphs that sometimes we see a complete picture, and other times we're only seeing some fraction of the relevant Jira issues.
In both cases this seems to impact Jira projects that I know have quite a lot of issues, in one case >10k in the last year or so.
When this was first brought up, I triggered a full refresh, which did not seem to fix it, but then, seemingly after another couple of 'normal' refreshes, the issues did re-populate the DB.
However, now a couple of days later they've gone again. It seems mostly that I'm left with small numbers of issues from back at the beginning of 2023, which makes me think maybe something is emptying out the previous data, kicking off a process to work through the re-import, but then failing for some unknown reason.

What do you expect to happen

I expected the Jira issues to consistently be present; they're still in Jira under the board filter etc., so there is no obvious reason they should disappear from DevLake.

How to reproduce

Hard to say at this time. I'd suggest a long-lived Jira project with thousands of issues spread over several months is a good place to start. I guess run repeated refresh cycles and see if the data population in DevLake changes between them in ways it should not.

Anything else

So far I've seen this happen for 2 specific projects (out of about 25 that we sync), though it's possible that the same issue happens elsewhere and is just less obvious in a graph.
Not sure if it matters, but some of our pipelines are also syncing quite a few Azure DevOps repos, so the project pipeline itself can take >6 hours to run.
I tried looking in the container logs but did not spot anything that looked like an error.
I'm happy to try running with additional debug or whatever would be helpful to assist in understanding this issue better.

Obviously it rather undermines the business's faith in the graphs to have them suddenly underreporting by hundreds of missing Jira issues.

Version

v1-custom

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

@wouldd wouldd added the type/bug This issue is a bug label Aug 2, 2024
@dosubot dosubot bot added the severity/p1 This bug affects functionality or significantly affect ux label Aug 2, 2024

dosubot bot commented Aug 2, 2024

Hello, @wouldd! I'm a bot that can help you solve bugs, answer questions, and become a contributor. I'm here to assist you while you wait for a human maintainer.

The disappearance of Jira issues from the dataset in DevLake, especially for projects with a large number of issues, could be due to the way the id field is constructed and used as a primary key. According to the schema, the id for Jira issues is composed of several parts, including the Jira source ID and the Jira issue ID. If there are inconsistencies or collisions in these IDs, it could lead to issues not being properly recorded or retrieved.

To resolve this, ensure that the id field is correctly and uniquely constructed for each Jira issue. Verify that the Jira source ID and Jira issue ID are correctly captured and concatenated to form a unique identifier for each issue. Additionally, check for any potential data truncation or transformation issues that might affect the uniqueness of the id field.

Here is the relevant part of the schema for reference:

| Field | Type | Length | Description | Key |
| ----- | ---- | ------ | ----------- | --- |
| `id` | varchar | 255 | An issue's `id` is composed of `<plugin>:<Entity>:<PK0>[:PK1]...`. For GitHub issues, the id looks like `github:GithubIssues:<GithubIssueId>`, e.g. `github:GithubIssues:1049355647`. For Jira issues, the id looks like `jira:JiraIssues:<JiraSourceId>:<JiraIssueId>`, e.g. `jira:JiraIssues:1:10063`; `<JiraSourceId>` identifies which Jira source the issue came from, since DevLake users can import data from several different Jira instances at the same time. | PK |

Ensure that the JiraSourceId and JiraIssueId are correctly and uniquely assigned to avoid any conflicts or data loss. Additionally, consider reviewing the domain layer schema and the data flow architecture to ensure that the data transformation processes are correctly handling the unique identifiers and not causing any data inconsistencies [1][2].

Furthermore, if you are using webhooks, ensure that each project has its own webhook. DevLake determines the project an issue or deployment belongs to by segregating between the webhooks. If a webhook is used by multiple projects, all issues or deployments published by that webhook will be replicated among those projects, leading to potential data inconsistencies. Each project should have its own dedicated webhook to avoid such issues [3][4][5].

For further troubleshooting, you can refer to the DORA Validation Dashboard in DevLake, which provides a step-by-step guide to check which step went wrong for all four DORA metrics. This can help you identify if the issue lies in the data mapping or querying process. Each chart in the dashboard has a hidden button in the top-right corner to access the context menu, where you can click Edit to view the SQL scripts behind the charts and understand how the data is queried [3][4][5].

If none of these solutions resolve your problem, consider creating an issue on the DevLake GitHub repository for further assistance.

To continue talking to Dosu, mention @dosu.

@wouldd
Contributor Author

wouldd commented Aug 12, 2024

@klesh Any update on this? Anything I can do to get more information about what is happening to help you run it down? This is happening regularly for 2 specific projects; each time I can force a full refresh and the data returns, but one refresh later much of the data for those projects 'disappears'.
This is causing understandable concern from the business, which gets entirely inaccurate graphs.


dosubot bot commented Aug 12, 2024

The issue you're experiencing with Jira issues disappearing is likely related to the progressive load mechanism in DevLake. This mechanism first deletes existing data and then inserts new data. If the deletion step completes but the insertion step fails, it can result in data loss [1].

To gather more information and troubleshoot further, you can:

  1. Check Logs: Look for any errors or warnings in the container logs during the refresh process. This can help identify if the insertion step is failing.
  2. Verify Unique IDs: Ensure that the id field for Jira issues is correctly composed and unique. Verify that the JiraSourceId and JiraIssueId are correctly set and unique across different Jira instances [2].
  3. Review Webhooks: Ensure each project has its own webhook. Using a single webhook for multiple projects can cause data inconsistencies [3].
  4. Use DORA Validation Dashboard: Utilize the DORA Validation Dashboard in DevLake to check which step went wrong for the DORA metrics. This can help identify issues in the data mapping or querying process [3].

By following these steps, you should be able to gather more information and potentially identify the root cause of the issue.

To continue talking to Dosu, mention @dosu.

@klesh
Contributor

klesh commented Aug 13, 2024

@wouldd Sorry, no update.

I don't really understand what happened, nor do I have any theories at this point; the description is too broad and vague, and there is too little information to work with.

I need any of the following conditions to be fulfilled before I can do anything:

  • we can figure out how to reproduce the problem stably
  • there are crime scenes to be investigated
  • any theories to be verified by reading the code

@wouldd
Contributor Author

wouldd commented Aug 19, 2024

@klesh I'd love to be able to provide more info. Is there any way to explicitly capture logs for a particular pipeline? I had a theory that the default of 'skipping failed steps' might be causing issues; the projects that seem to suffer have quite a lot of git repos configured for checking after the Jira step. But I also need to understand more about the sequence of events in the process to guess at what's going wrong. Is the Jira task entirely self-contained, or does something wait until all the other steps in the pipeline run?
Is there any way to force a pipeline to just do a full refresh every time? It seems like choosing that option fixes it when it has started missing stuff, so that might be a workaround I could use for the projects that have this issue.

@klesh
Contributor

klesh commented Aug 21, 2024

You can download the pipeline log by clicking the button shown in the screenshot below:

[screenshot: pipeline log download button]

To address the issue, I suggest we start by identifying which issues are missing and look for any patterns. One approach could be to back up the database before each pipeline execution and compare it with the version where issues are missing. This could help us pinpoint where the discrepancies occur.

@wouldd
Contributor Author

wouldd commented Aug 21, 2024

@klesh Do those logs survive a pod restart? When I try downloading from a previously run pipeline I get a 0-length file.
I have seen that there are some failures in pipelines; in a couple of cases I've hit Jira API limits - Jira returns a specific code and a 'too many requests' response - I'm assuming there is no backoff logic in the code to respond to that?
In terms of what is missing, it is hundreds of issues; in one case a whole Jira project just disappears from the graphs when this happens. I'm not sure how informative it's going to be to compare as you suggest. Can you point me to any particular logs to look for around the point that it empties the previous set of data? Obviously something happens after it clears out data and before it repopulates, so if I can narrow in on the code/logs of that start point I can hopefully get a clearer picture.

@wouldd
Contributor Author

wouldd commented Aug 22, 2024

A little more on this. Exploring the DB, I see that if I query:
SELECT count(*) FROM devlake.issues WHERE original_project = 'Reporting'
I get 169 rows.
However, if I query
SELECT count(*) FROM devlake._raw_jira_api_issues WHERE params = '{"ConnectionId":1,"BoardId":319}'
which is the board ID that represents the issues in that project, I get 2001 rows.
If I run the board's filter query in Jira I get 1990 issues.

This suggests to me that the collector is working (though I'm not sure why there are 11 unaccounted-for raw data rows); however, something is going wrong in the conversion that populates the refined tables.
Does that help suggest anything I should look at?

@klesh
Contributor

klesh commented Aug 26, 2024

@wouldd The logs should be available after a restart if the following settings are correctly configured in your docker-compose.yml
[screenshot: docker-compose logging/volume settings]
Feel free to adjust this further to fit your context!

_raw_jira_api_issues contains multiple versions of any given issue from different points in time, so it is expected to have a greater total number of records than the _tool_jira_issues and issues tables.

Did all 169 issues come from the same Jira board? How many issues in the _tool_jira_issues for the board?

@wouldd
Contributor Author

wouldd commented Aug 27, 2024

@klesh I'm deploying into Kubernetes using your Helm chart, so I assume those settings would be correct there? I'll double check, but if you're not making a persistent volume claim then they probably disappear on node restart.
_tool_jira_issues also had 169. It seemed like that's also populated after the raw data is collected, but possibly it's just that the raw table has multiple copies; I guess I can try and query for that? Does the process log how many issues it finds under the board query? Something I can search for in the log stream to narrow down where things start to go wrong?

@wouldd
Contributor Author

wouldd commented Sep 2, 2024

@klesh Somewhat related to this, I'm wondering if something can be done to avoid dropping all the previous data at the start of the refresh process? Even if it works, I wind up with a potentially long period where a whole project just disappears from the graphs while that project's refresh is running. It seems like it would be better to only replace rows that get updated, rather than block-delete everything and then refill. Even a working system creates quite long periods where you cannot trust the graphs to be showing an accurate picture.

@klesh
Contributor

klesh commented Sep 2, 2024

@wouldd, I’m not particularly familiar with the Helm chart either, and you’re right about the persistent volume claim (AFAIK). By the way, do these issues belong to multiple boards?

@wouldd
Contributor Author

wouldd commented Sep 2, 2024

@klesh Good question. I'm almost certain there will be multiple boards in existence that reference the same tickets, but I'm not sure whether they all belong to multiple boards that are being processed by DevLake. I know some teams do have boards that pull from multiple projects, so it's a distinct possibility. Would that cause problems?

@klesh
Contributor

klesh commented Sep 4, 2024

@wouldd It could be; extractors and converters wipe out all records of the board before populating the target table.
So, if those issues belong to another board and suddenly disappear from the JIRA side for some reason, they will be removed after syncing that board.

@wouldd
Contributor Author

wouldd commented Sep 4, 2024

@klesh To be clear, nothing is disappearing on the Jira side, and if the board's filter changes then the refresh refuses to run without a full refresh anyway.
I have noticed that sometimes I get a Jira API 'too many requests' response. I'm not certain this is always correlated; however, is it possible that the process wipes all record of the issues, then fails to repopulate from Jira because of some API error, which then leaves the DBs without that history? It's not entirely clear how this wiping works in the non-full-refresh mode.
FWIW, I've stopped using the built-in pipeline schedules and I'm currently testing with a script that always triggers a full refresh to see if that helps - so far so good.

I'm still quite concerned about any logic that wipes out data before loading new data. This is happening on a refresh right now for one of our bigger, more important projects; there is only one Jira board associated and it has a simple query, with >60k issues all time in this project. As soon as a refresh starts, this project completely disappears from the graphs and remains gone for quite a long time while the refresh is running. Given this could happen at any point in the day, it causes concern with the business. Is it not possible to avoid this protracted period of having no data?

@klesh
Contributor

klesh commented Sep 5, 2024

@wouldd I completely understand your concern because I share the same. Would it be possible to set up a new instance with a fresh database, focusing solely on the important board? This way, we can test whether the problem persists in that isolated environment.

@wouldd
Contributor Author

wouldd commented Sep 5, 2024

@klesh We do have a test environment for DevLake, but we're already hitting Jira API rate limits, and I'm not sure about duplicating a large project; what would this achieve that we can't do in our main system? I'm still not able to get useful logs out, which I think is because the Helm chart does not define any persistent volume claims, so those logs do not last through a restart. That's going to be the same in our dev environment.
I will say that we're now on day 4 of me forcing only full refreshes, ignoring the built-in scheduler, and so far the only ongoing problem is that if you look at the graphs when a refresh kicks off, the information vanishes until it gets far enough to have repopulated - I feel like that could be solved with some kind of transaction or temp-table switching.
I guess you don't normally run this in Kubernetes despite the Helm chart?
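For reference, the "temp-table switching" idea mentioned above could, in principle, look something like the sketch below. This is a minimal illustration against MySQL using database/sql; the table names are illustrative and this is not anything DevLake does today:

```go
package tableswap

import "database/sql"

// rebuildAndSwap sketches the idea from the comment above: convert into a
// side table, then swap it in with a single atomic RENAME TABLE so
// dashboards never observe an empty issues table mid-refresh.
// Table names here are illustrative, not DevLake's actual schema.
func rebuildAndSwap(db *sql.DB) error {
	stmts := []string{
		"CREATE TABLE issues_new LIKE issues",
		// ... populate issues_new from the freshly extracted/converted data ...
		"RENAME TABLE issues TO issues_old, issues_new TO issues",
		"DROP TABLE issues_old",
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			return err
		}
	}
	return nil
}
```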

@wouldd
Contributor Author

wouldd commented Sep 5, 2024

@klesh So I think I may have finally tripped this scenario whilst slowly controlling the refreshes but also putting the system under some load. In this case the Jira refresh starts, the database wipes the info, but then the collector fails because Jira throws a 429 - too many requests; I'll paste the full error at the end.
I would expect in this scenario that it would not wipe out the database until the collector has been successful. By default pipelines skip failed tasks, which makes it hard to see when there has been a failure, since the pipeline often just shows 'partial success', and if I'm looking after the daily node recycle then I can't get the logs. Once this has happened, the next 'normal' refresh will only collect issues that have updated since the last pipeline, even though it essentially failed this critical step. Consequently, once the data is gone, it never comes back without a full refresh running successfully.
Does that sound about right? Basically any error in the collector during the pipeline leaves you with no data even if you had it before, and only a successful full refresh can restore it. That feels to me like a problem with sequencing and error handling?

That error is:
Error running pipeline 3708. Wraps: (2) Error running task 87040. | Wraps: (2) subtask collectIssues ended unexpectedly | Wraps: (3) Error waiting for async Collector execution | Wraps: (4) | | combined messages: | | { | | Retry exceeded 3 times calling agile/1.0/board/446/issue. The last error was: Http DoAsync error calling [method:GET path:agile/1.0/board/446/issue query:map[expand:[changelog] jql:[updated >= '2023/11/10 00:00' ORDER BY created ASC] maxResults:[100] startAt:[10700]]]. Response: <title>Oops - an error has occurred</title><script src='/static-assets/metal-all.js'></script><script type="text/javascript">document.body.className += " js-enabled";</script>

Something went wrong

Try reloading the page, then check our Statuspage for any current outages. If there are no relevant outages, create a support request so we can help you out.

If you create a request, include the following so we can help you as fast as possible:

  • Error type: 429 - Too many requests
  • Log reference: 6c7e45aa-0ff5-4723-9d3a-c511a60251d9
(429) | | ===================== | | Retry exceeded 3 times calling agile/1.0/board/446/issue. The last error was: Http DoAsync error calling [method:GET path:agile/1.0/board/446/issue query:map[expand:[changelog] jql:[updated >= '2023/11/10 00:00' ORDER BY created ASC] maxResults:[100] startAt:[10800]]]. Response: <title>Oops - an error has occurred</title><script src='/static-assets/metal-all.js'></script><script type="text/javascript">document.body.className += " js-enabled";</script>

Something went wrong

Try reloading the page, then check our Statuspage for any current outages. If there are no relevant outages, create a support request so we can help you out.

If you create a request, include the following so we can help you as fast as possible:

  • Error type: 429 - Too many requests
  • Log reference: cd93b151-b94b-4b8c-905c-4447e54291af
(429) | | } | Error types: (1) *hintdetail.withDetail (2) *hintdetail.withDetail (3) *hintdetail.withDetail (4) *errors.errorString Error types: (1) *hintdetail.withDetail (2) *errors.errorString
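The log above shows the built-in retry giving up after three quick attempts. For context, a 429-aware retry that honours Retry-After might look roughly like the sketch below; this is a minimal illustration against plain net/http, not DevLake's actual ApiAsyncClient, and the function name is hypothetical:

```go
package jiraclient

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithBackoff retries a GET while the server answers 429 Too Many Requests,
// waiting for the Retry-After header when present and otherwise backing off
// exponentially, instead of failing after a fixed number of immediate retries.
func getWithBackoff(client *http.Client, url string, maxRetries int) (*http.Response, error) {
	wait := 2 * time.Second
	for attempt := 0; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()
		if attempt >= maxRetries {
			return nil, fmt.Errorf("still rate limited after %d attempts", attempt+1)
		}
		// honour Retry-After if the server sends it, otherwise keep doubling the wait
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, perr := strconv.Atoi(s); perr == nil {
				wait = time.Duration(secs) * time.Second
			}
		}
		time.Sleep(wait)
		wait *= 2
	}
}
```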

@klesh
Contributor

klesh commented Sep 6, 2024

I normally use docker-compose for development, but our SaaS service does use the helm-chart and we have a centralized logging system so we don't need the persistent volume.

I don't think the 429 error is the cause of the missing-data problem, because incremental mode keeps all the previously collected data, and the subsequent subtasks like extractors and converters should be fine.

Honestly, I don't have enough material to investigate, so I don't have any clue how to proceed next.
Maybe we could arrange a free trial on https://www.devinsight.ai/ and see if we can reproduce your problem there, if you are interested.

@wouldd
Contributor Author

wouldd commented Sep 6, 2024

@klesh Sadly the cloud offering won't be of much use to us, since you don't officially support on-prem Azure DevOps anyway.
In any case, I had an occurrence today of requesting a full refresh, which claimed to complete successfully but yielded no results in the issues table. I was able to recover a log from this run since I got to it before the next node recycle.
Of note in this log, I think, is:
time="2024-09-06 11:27:18" level=info msg=" [pipeline service] [pipeline #3736] [task #88338] [extractIssues] extract Issues, connection_id=1, board_id=446"
time="2024-09-06 11:27:28" level=info msg=" [pipeline service] [pipeline #3736] [task #88338] [extractIssues] get data from _raw_jira_api_issues where params={"ConnectionId":1,"BoardId":446} and got 12768"

and

time="2024-09-06 11:45:27" level=info msg=" [pipeline service] [pipeline #3736] [task #88338] executing subtask convertIssues"
time="2024-09-06 11:50:10" level=info msg=" [pipeline service] [pipeline #3736] [task #88338] [convertIssues] finished records: 1"

It's not clear to me how it got 12768 issues from the board filter but converted only 1? Perhaps I'm misunderstanding what these logs show.

I can say that I am trying at the moment with a new board filter which excludes everything outside the last 365 days, since this particular project actually has >65k issues all time, but I really only care about the last year at most.

task-88338-2-1-jira.log
EDIT
I've also run this twice more. I updated the underlying board filter to include only items created in the last year, to reduce the total volume of data coming from Jira:
board_filtered_previousyear_task-88343-2-1-jira.txt
which still shows no data. I then re-ran this using the re-transform option:
re_transform_datatask-88348-2-1-jira.txt
The logs between these seem quite inconsistent to my eye, and it's hard to track what is really happening here. The net result is that I still have only 1 item showing up in my graph for this project, which certainly has thousands in the timeframe I'm including in queries.

@klesh
Contributor

klesh commented Sep 9, 2024

Interesting, that was indeed very odd...
The subtask finished without error, which is very very strange.

@klesh
Contributor

klesh commented Sep 9, 2024

How about your database setup? Is it an external database server?

@wouldd
Contributor Author

wouldd commented Sep 9, 2024

@klesh Yes, it's MySQL (8.0) in Amazon RDS.

@klesh
Contributor

klesh commented Sep 10, 2024

Weird, RDS should be fine.
What was your version again?

@wouldd
Contributor Author

wouldd commented Sep 10, 2024

@klesh My version is approximately your v1 branch, with some customisation of the Azure DevOps Go plugin to support our internal server setup.
I did look at the latest Jira plugin code and it doesn't look like anything has really changed in there since I last merged.
I'm currently looking at instrumenting the code with more diagnostics to try and help understand better what is going wrong here, but I'd love any suggestions you have on where I should be looking to understand why we don't seem to be getting as many issues back from the Jira API requests as I see in Jira.

@klesh
Contributor

klesh commented Sep 11, 2024

@wouldd I'm not entirely sure at this point. It might help if we cross-reference the code changes with the time the issue first appeared. Do you recall when this problem began?

@wouldd
Contributor Author

wouldd commented Sep 11, 2024

@klesh The problem is not happening consistently (which is part of the problem), so I don't think it's something obviously coinciding with code changes; rather, I suspect a subtle timing condition based on the Jira project itself and how things happen to run in the code.
I've been adding debug logging as best I can to flesh out my understanding of what's happening, and I do wonder if there is a potential problem with the batch divider logic.
My understanding is that the code batches DB writes by issue type into sets of 500 before they are written in one go. The first time the code sees a given issue type it creates an empty batch to start using, and at that point it calls delete on the database.
[screenshot: batch divider code showing the delete on first batch creation]
I'm seeing quite a few deletes against the same raw database during the process, and it's not clear to me that this is scoped. I'm wondering if there is a scenario in which data has already been written by one batch when another is created and triggers a wipe of the data that was already written?

In general, my observation is that the structure of these raw data tables is forcing a situation whereby there is no unique identifier for a given issue payload? Maybe I'm misreading things, but it would seem there would be no need to purge this table ahead of a full refresh if the id was based on the Jira unique issue id; it would just be able to do a createOrUpdate, which would mean you'd never have weird gaps when the data is dropped etc.

I will say that having instrumented the code and switched on debug logging, I have not caught a failure scenario, which could be bad luck, or it could be that the act of logging more has shifted the timing a little to make it less of a problem.

@klesh
Contributor

klesh commented Sep 12, 2024

@wouldd

Wow, this might be the key issue!

It looks like the BatchSaveDivider could be accessed by multiple threads, and without any locking mechanism in place, it's highly likely that this is causing the problem.

Great catch—well done!

Would you be able to implement a locking mechanism and verify if this resolves the issue?
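For reference, the locking idea might look roughly like the sketch below. This is not DevLake's actual BatchSaveDivider API; the type and function names are illustrative, and it only shows the shape of the fix: guard the lazily created per-table batch, and the one-time delete that goes with it, behind a mutex so two goroutines cannot both take the first-time path.

```go
package divider

import "sync"

// BatchSave stands in for DevLake's per-table batch writer.
type BatchSave struct{ /* ... */ }

// LockingDivider guards the lazily created per-table batches with a mutex,
// so the "first time seen: delete existing rows, then create the batch"
// path can only ever run once per table, even under concurrent extractors.
type LockingDivider struct {
	mu      sync.Mutex
	batches map[string]*BatchSave
	onFirst func(table string) error // e.g. delete the rows being replaced
}

func NewLockingDivider(onFirst func(string) error) *LockingDivider {
	return &LockingDivider{batches: make(map[string]*BatchSave), onFirst: onFirst}
}

// ForTable returns the batch for a table, creating it exactly once.
func (d *LockingDivider) ForTable(table string) (*BatchSave, error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if b, ok := d.batches[table]; ok {
		return b, nil
	}
	if d.onFirst != nil {
		if err := d.onFirst(table); err != nil {
			return nil, err
		}
	}
	b := &BatchSave{}
	d.batches[table] = b
	return b, nil
}
```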

@wouldd
Contributor Author

wouldd commented Sep 16, 2024

FWIW, I have implemented a fix which I'm testing this week. I'm actually on holiday this week, but I'm leaving things running with my fork and I'll check next week to see if it survived without losing any data. We shall see.

@klesh
Contributor

klesh commented Sep 17, 2024

Looking forward to it.

@d4x1
Contributor

d4x1 commented Sep 27, 2024

@wouldd Is there any progress?

@wouldd
Contributor Author

wouldd commented Sep 27, 2024

@d4x1 Yes, my fork has been running for a couple of weeks now without dropping any issues, so I'm happy that I've fixed the problem we were having. However, to do so I made some changes to the core logic that require plugin changes, and I have obviously only updated the two plugins that I am using. I also need to merge some of the latest into my fork just to be properly up to date, but I didn't want to rock the boat on my side before fixing the core issue.
Not sure what the best route would be in terms of merging any kind of PR - if we were the only people seeing this issue then I can imagine my changes might be a little far-reaching to be attractive. Either way, I have a lot going on work-wise at the moment and won't have time for a week or so to prepare anything properly.

@d4x1
Contributor

d4x1 commented Sep 27, 2024

@wouldd I am curious about what's wrong with the current code; can you give us some hints? :)

We can evaluate the priority of this bug. If it is an emergency, we should fix it ASAP. If its impact is limited, feel free to submit your PR to fix it.

@wouldd
Contributor Author

wouldd commented Sep 27, 2024

@d4x1 So, I alluded to the observation in an earlier comment. The current implementation is designed in such a way that it must delete all the contents from the raw tables before populating them again, because it just uses randomly generated primary keys. So if anything goes wrong during the process you can wind up without data. I'm not 100% certain, but I think there are cases where an SQL deadlock error in a batch save can cause a failure that gets swallowed.
So I updated the code to allow the plugins to specify the JSON path to the unique id of the object being retrieved from the remote system. Everything being pulled in as raw data always has a pretty obvious unique value from that system; often it's called 'id', but I basically just set a const on all the plugin objects to define what it is for that object's payload and then used this value when storing data in the raw tables. That means it can always just do createOrUpdate, so regardless of any transitory issues I never run into problems where previously fetched data disappears.
I also put some explicit deadlock detection/retry logic into the code around those createOrUpdate calls.
I will note that I also had to upgrade Grafana because the version you were using had a bug around handling true uint64 values even though the tables were already defined that way.
I confess I never completely identified exactly the code path that was causing me problems; I just reworked things to an architecture that seemed more appropriate to me and avoided the entire need to delete things as part of any refresh.
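The deadlock detection/retry part of that description might look roughly like the sketch below; this is an illustrative wrapper assuming the go-sql-driver/mysql error type, not the actual code from the fork, and the function name is hypothetical:

```go
package saver

import (
	"errors"
	"time"

	"github.com/go-sql-driver/mysql"
)

// withDeadlockRetry re-runs fn when MySQL reports a deadlock (error 1213)
// or a lock wait timeout (error 1205); fn would wrap the createOrUpdate
// call described above. Any other error is returned immediately.
func withDeadlockRetry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		var myErr *mysql.MySQLError
		if !errors.As(err, &myErr) || (myErr.Number != 1213 && myErr.Number != 1205) {
			return err // not a retryable lock error
		}
		// simple linear backoff before retrying the write
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return err
}
```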

@d4x1
Contributor

d4x1 commented Sep 29, 2024

@wouldd Thanks for your reply.
I think adding a custom id field to the raw-layer tables is very shrewd. It could be a standalone feature.

You've made two significant improvements: unique ids in the raw tables and deadlock retry. Could you disable one of the two and see what happens? That would tell us which part is actually doing the work.

As for Grafana, feel free to upgrade it. (We have also found some vulnerabilities and are waiting for the Grafana team to fix them.)

@wouldd
Contributor Author

wouldd commented Oct 7, 2024

@d4x1 Hi, I'm afraid this is a pretty busy time of year for me at work and I don't have a setup that would reasonably let me test these independently. The changes work for me and my users are happy; I'm not going to risk breaking that again. Once the busy time is past I may be able to at least bring my fork in line with the latest to make it easier to assess my changes as a potential feature etc.

@d4x1
Contributor

d4x1 commented Oct 15, 2024


@wouldd Take your time; we're not in a hurry.


This issue has been automatically marked as stale because it has been inactive for 60 days. It will be closed in the next 7 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Dec 15, 2024
@klesh klesh removed the Stale label Dec 17, 2024