Long export time when the DB is big #100
Comments
Please let me know if I am leaking info in this screenshot. Also, if you could add a way to prevent info leaks when using debug mode, that would be awesome, because manually checking and removing the URL is a pain.
Can you test and see if #124 results in faster M3U returns? I've reworked the caching behind the scenes. It should still have similar behavior in that the cache needs to be built completely on the first request. Also
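(Roughly, the first-request build behaves like the toy Go sketch below. This is not the proxy's actual code, just an illustration of the described behavior; playlistCache and build are made-up names.)

```go
package main

import (
	"fmt"
	"sync"
)

// playlistCache is a toy model of "the cache is built completely on the
// first request": the first Get pays the full build cost, and every later
// Get returns the stored playlist immediately.
type playlistCache struct {
	once  sync.Once
	build func() string
	m3u   string
}

func (c *playlistCache) Get() string {
	c.once.Do(func() {
		c.m3u = c.build() // expensive: parse and merge all source playlists
	})
	return c.m3u
}

func main() {
	cache := &playlistCache{build: func() string { return "#EXTM3U\n..." }}
	fmt.Println(cache.Get()) // slow: builds the playlist
	fmt.Println(cache.Get()) // fast: served from memory
}
```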
The base URL is where the proxy server lives?
Yep! Not the base URL of the source streams.
OK, let me test and report back.
Every time you mention a specific PR, does that mean it is also available in the dev tag?
Only if the PR has been merged. There are specific PRs that I try not to merge immediately, as they change a lot of components.
I have tried all possible combinations, such as
Well, the fact that you're seeing that log means you're not using the image from the PR I mentioned. PRs that I mention will have a comment from a bot containing the image URL for that specific PR. For #124, it should be this comment. Use that image URL instead of the usual
To make things simpler for you, the base URL can be derived from the URL you use to access the generated M3U. For example, if you access the M3U with this URL:
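(Roughly, the idea is that the base URL is just the scheme plus host:port of whatever URL serves the M3U. The Go sketch below is only an illustration of that assumption; the address in it is hypothetical, since the real URL isn't shown in this thread.)

```go
package main

import (
	"fmt"
	"log"
	"net/url"
)

// deriveBaseURL keeps only the scheme and host:port of the URL used to
// fetch the generated M3U and drops the path and query string.
func deriveBaseURL(m3uURL string) (string, error) {
	u, err := url.Parse(m3uURL)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s://%s", u.Scheme, u.Host), nil
}

func main() {
	// Hypothetical address; the real M3U URL from the thread isn't shown here.
	base, err := deriveBaseURL("http://192.168.1.50:8080/playlist.m3u")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(base) // http://192.168.1.50:8080
}
```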
Rest assured, I am trying the correct image, and those IPs are from the Docker network on my Debian 12 Linux machine. I will share a screenshot of everything once I get back home. Thank you.
Sure. Do double-check, as it is actually impossible for that PR image to return that log line; it doesn't exist in the code of that PR.
I guess the workflow isn't doing what it's supposed to and is no longer building the right image for PRs for some reason. 🤦 I'll merge it to dev. You can test it from there instead.
OK, thanks.
Can you give me more context: what makes you say it doesn't work? What is the output when you request the M3U URL?
It keeps loading in the browser, or if I try to download the M3U via IDM, it never downloads. There is a small CPU spike on the DB container and the proxy container (I believe), but that is about it. For example, the small channel sample I am using for debugging (~38 channels) works as expected: sync is quick, and the same goes for export. But when you give it multiple URLs/providers that come with VOD, etc., which all together is around 300 MB in size, then export doesn't seem to work. I don't mind the sync taking time the first time, but I would like the exports after that to be quick.
I did more work on this issue. I've merged the changes to the dev build if you want to try it out again.
Please be patient and wait for everything to be processed into the cache. We're talking about ~300 MB worth of strings being processed. How I wish it were as easy as simply setting the "amount of CPU" to be used, but that is just not how software works. Most of the processes required for the proxy to work are single-threaded and cannot be parallelized.

Also, do disable debug mode if you want the maximum performance possible. Logging affects performance more than you might expect, especially in cases like these: logging is a single-threaded process, which pretty much forces even the parallelized processes to wait for the log to be printed to the terminal before the next job is executed. The sorting is index-based and does not add any time complexity at all.

Once everything is processed, the M3U will be stored in plain text, both in memory and as a file. At that point, the only bottlenecks are your RAM speed and disk I/O. The processing time will always be directly proportional to the amount of ingress data.
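(To illustrate the logging point, here is a toy Go sketch, not the proxy's actual code: a single shared lock stands in for the single-threaded logger, and every worker that emits a debug line has to queue behind it.)

```go
package main

import (
	"fmt"
	"sync"
)

var logMu sync.Mutex // stand-in for the single-threaded logger

func debugLog(msg string) {
	// Every goroutine has to take the same lock before writing to the
	// terminal, so heavy debug logging queues up otherwise parallel work.
	logMu.Lock()
	defer logMu.Unlock()
	fmt.Println(msg)
}

func processChunk(id int, debug bool, wg *sync.WaitGroup) {
	defer wg.Done()
	// ... pretend to parse one slice of the ~300 MB of playlist data ...
	if debug {
		debugLog(fmt.Sprintf("[DEBUG] processed chunk %d", id))
	}
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go processChunk(i, true, &wg) // with debug=false the lock is never touched
	}
	wg.Wait()
}
```

With debug disabled, the workers never contend on that lock, which is why turning debug mode off helps throughput.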
I've just merged more optimizations to dev. This will probably be something that improves over time as I fix other issues as a side effect. I won't be focusing on this anymore in the near future. Converting this to a discussion instead.
This is in reference to #72 (comment). I was trying to capture an interesting line of the log that day but could not do it; I think I just bumped into it by mistake. The line is this:
2024/08/23 22:57:17 [DEBUG] Cache miss. Retrieving streams from Redis...
Not sure if this will be helpful or not.