Data fetching, parsing and caching needs a rework #85

Open
Darkmift opened this issue May 30, 2023 · 0 comments
Currently the flow is:

Main data generation and storing:
1. Fetch a readme file.
2. Parse it for repo (owner/repoName) and company (name) data.
3. Make a GQL request for every company and repo found above.
4. Store the results in Redis as a single JSON blob.
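The generation chain above can be sketched roughly like this. All names here (`parseRepos`, `fetchRepoData`, the link format in the readme, and a `Map` standing in for the Redis client) are hypothetical, not the project's actual API:

```typescript
// Sketch of the "main data generation and storing" chain.
// Assumes repos appear in the readme as github.com/owner/repoName links.

// Step 2: extract unique "owner/repoName" pairs from the readme.
function parseRepos(readme: string): string[] {
  const matches = readme.match(/github\.com\/([\w.-]+\/[\w.-]+)/g) ?? [];
  return [...new Set(matches.map((m) => m.replace("github.com/", "")))];
}

// Step 3: stand-in for the per-repo GQL request.
async function fetchRepoData(fullName: string): Promise<object> {
  return { fullName, stars: 0 }; // placeholder payload
}

// Step 4: stand-in for Redis — a single JSON blob under one key.
const cache = new Map<string, string>();

async function generateAndStore(readme: string): Promise<void> {
  const repos = parseRepos(readme);
  const results = await Promise.all(repos.map(fetchRepoData));
  cache.set("mainData", JSON.stringify(results));
}
```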

On every data request:

1. Redis is queried for the data.
2. If Redis holds the data, it is returned in the response.
3. Otherwise, the whole "Main data generation and storing" chain above is triggered, then the data is returned.
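The read path above amounts to a read-through cache. A minimal sketch, assuming a hypothetical `regenerate` callback for the generation chain and a `Map` standing in for Redis:

```typescript
// Read-through cache: return the cached blob if present,
// otherwise run the full generation chain and cache the result.
async function getMainData(
  store: Map<string, string>,
  regenerate: () => Promise<object[]>,
): Promise<object[]> {
  const cached = store.get("mainData");
  if (cached !== undefined) {
    return JSON.parse(cached);
  }
  const fresh = await regenerate();
  store.set("mainData", JSON.stringify(fresh));
  return fresh;
}
```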

Main issue:
There is no fine-grained search or diff management, meaning dozens of GQL requests every time instead of merely tracking changes and requesting only the affected/new companies/repos. The rate limit for a standard key can lead to inconsistent data display on the client.
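Diff management could be as simple as comparing the newly parsed repo list against the previously stored one and only issuing GQL requests for additions. A hedged sketch (names are hypothetical):

```typescript
// Compute which repos are new or gone since the last run,
// so only the affected entries need fresh GQL requests.
interface RepoDiff {
  added: string[];
  removed: string[];
}

function diffRepos(previous: string[], current: string[]): RepoDiff {
  const prev = new Set(previous);
  const curr = new Set(current);
  return {
    added: current.filter((r) => !prev.has(r)),
    removed: previous.filter((r) => !curr.has(r)),
  };
}
```

Only the `added` entries would need GQL calls; `removed` entries get dropped from the cached JSON, and unchanged entries keep their cached data (optionally refreshed on a slower schedule).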

Another major issue:
The source readme needs to be optimized to avoid bad GQL calls.
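One way to avoid bad GQL calls is validating each parsed entry against the expected `owner/repoName` shape before requesting it. A sketch — the pattern below is an assumption about what counts as "bad", based on GitHub's allowed characters for logins and repo names:

```typescript
// Reject entries that cannot be a valid "owner/repoName" pair,
// so they never reach the GQL layer. Assumption: owners are
// alphanumeric plus hyphens; repo names also allow "_" and ".".
const REPO_PATTERN =
  /^[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?\/[A-Za-z0-9._-]+$/;

function isValidRepoEntry(entry: string): boolean {
  return REPO_PATTERN.test(entry);
}

function filterBadEntries(entries: string[]): string[] {
  return entries.filter(isValidRepoEntry);
}
```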
