/pin-clusters with redis (plus /heatmap) #574
Conversation
server/src/settings.example.cfg (Outdated)
@@ -37,3 +37,7 @@ GITHUB_TOKEN = REDACTED
ISSUES_URL = https://api.github.com/repos/hackforla/311-data/issues
PROJECT_URL = REDACTED
GITHUB_SHA = DEVELOPMENT

[Redis]
ENABLED = True
Can we default this to false?
Also, is there a URL for redis? I'm guessing it's defaulting to localhost:6379. In prod we will have an environment variable override for this.
OOORRRRR we can base ENABLED off of "was there a redis url provided or not", so we kill two birds with one stone.
Yeah, I can change that.
For the url, right now it's using 'redis' as the host, which I guess is the hostname of the redis service within the docker compose network...if I understand docker correctly. And the port is 6379. I'll figure out how to set this up as an env variable.
Ohhh okay yeah...makes sense.
Within the docker env it'll be exposed as both redis:6379 and localhost:6379, so either way we want to be explicit about how we address redis so we can override it with heroku's fn8934fh3o8g3o3qhg893gg.heroku.com:6379.
Ok cool, this is done.
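A minimal sketch of the configuration approach discussed in this thread, assuming a `REDIS_URL` environment variable: enable redis only when a URL is actually supplied, so one setting covers both the "default to false" and "overridable in prod" concerns. The variable name and the redis-py usage are illustrative assumptions, not the code in this PR.

```python
# Illustrative sketch only; the REDIS_URL variable name is an assumption.
import os

import redis  # redis-py

# Inside the docker compose network the service is reachable as redis://redis:6379;
# a hosted deployment (e.g. Heroku) would point REDIS_URL somewhere else.
redis_url = os.environ.get("REDIS_URL")

# ENABLED is derived from whether a URL was provided at all, so an unset
# variable means "no redis" and no separate ENABLED flag is needed.
redis_enabled = redis_url is not None

cache = redis.Redis.from_url(redis_url) if redis_enabled else None
```

Under this scheme, local development could set `REDIS_URL=redis://redis:6379` in docker compose, and production would supply its own value.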
These perf results are awesome, gonna be a game changer for sure 🚀
Fixes #543 and #565 (the backend part).

This creates the `/pin-clusters` endpoint with redis support. I ran some jmeter tests comparing the existing `/pins` endpoint with the new `/pin-clusters` endpoint. For the test I ran 3 separate API calls against each of the two endpoints:

- small: 1 NC
- medium: 10 NCs
- large: 50 NCs

All of the API calls used a 1-year date range (2019-01-01 through 2020-01-01) and asked for all 11 request types.
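For illustration, one of the "medium" test calls might look like the sketch below. The parameter names, the request shape, and the localhost URL are assumptions made for the example, not necessarily the endpoint's actual contract.

```python
# Hypothetical example of a "medium" test call (10 NCs, 1-year range, all request types).
# Parameter names, request shape, and the localhost URL/port are assumptions.
import requests

filters = {
    "startDate": "2019-01-01",
    "endDate": "2020-01-01",
    "ncList": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],            # 10 neighborhood councils
    "requestTypes": ["Bulky Items", "Graffiti Removal"],   # ...all 11 types in the real test
}

resp = requests.post("http://localhost:5000/pin-clusters", json=filters)
print(resp.status_code, len(resp.content), "bytes")
```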
I disabled redis during the test, and proxied my localhost through an `ngrok` URL to simulate live-server conditions. These were the results:
[jmeter results: pins (small) vs pin-clusters (small)]
[jmeter results: pins (medium) vs pin-clusters (medium)]
[jmeter results: pins (large) vs pin-clusters (large)]
So basically, `/pin-clusters` is faster than `/pins` for all filters, and the advantage grows as the filters get broader. This makes sense, since the payload for `/pin-clusters` stays essentially the same size, while the payload for `/pins` grows with the breadth of the filters.

Obviously `/pin-clusters` is still way too slow for medium and large requests. I'm pretty sure that's related to the inefficiency of IN queries (see #499) as well as the amount of time it takes to serialize rows from the database. I'm gonna look at that.

UPDATE:
Added a `/heatmap` endpoint that takes the standard filters and returns an array of lat/longs to support the heatmap. It returns the same data as the `/pins` endpoint, except it doesn't include the `srnumber` and `requesttype`. Also it uses the redis cache, and uses the same keys as the `/pin-clusters` endpoint because it requires a subset of that data.
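A rough sketch of how this kind of key sharing could work, assuming the cache key is derived purely from the filter set: `/heatmap` reads the rows already cached for `/pin-clusters` and keeps only the lat/longs. Function names, key format, and row fields below are illustrative assumptions, not the PR's actual code.

```python
# Illustrative sketch: filter-derived cache keys shared by /pin-clusters and /heatmap.
# Function names, key format, and row fields are assumptions, not the PR's actual code.
import hashlib
import json


def cache_key(filters: dict) -> str:
    # Same filters -> same key, regardless of which endpoint is asking.
    canonical = json.dumps(filters, sort_keys=True)
    return "pins:" + hashlib.sha1(canonical.encode()).hexdigest()


def get_cached_pins(filters, cache, query_db):
    key = cache_key(filters)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = query_db(filters)  # full rows: srnumber, requesttype, latitude, longitude, ...
    cache.set(key, json.dumps(rows))
    return rows


def heatmap(filters, cache, query_db):
    # Reuses the same cache entry and strips it down to [lat, long] pairs.
    rows = get_cached_pins(filters, cache, query_db)
    return [[row["latitude"], row["longitude"]] for row in rows]
```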