Persistent trip_updates when running getStopTimeUpdates #160
Can you share with me the config.json file you are using so I can try out your feed? |
Note that there are some additional variables I'm using as part of the integration with the rest of my app.
|
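For reference, a minimal node-gtfs setup with a GTFS-Realtime URL might look something like the following sketch; the realtimeUrls field name and the URLs are assumptions that may differ between node-gtfs versions:

```js
// Illustrative only: one agency with a static GTFS zip and a GTFS-Realtime
// trip updates feed. Field names (e.g. realtimeUrls) and URLs are placeholders.
import { importGtfs } from 'gtfs';

const config = {
  sqlitePath: './gtfs.sqlite',
  agencies: [
    {
      url: 'https://example.com/gtfs/static.zip',                // static GTFS (placeholder)
      realtimeUrls: ['https://example.com/gtfs/tripupdates.pb'], // realtime feed (placeholder)
    },
  ],
};

await importGtfs(config); // one-time import of the static GTFS into SQLite
```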
Thanks for sharing this. When you call getStopTimeUpdates(), it only reads whatever GTFS-Realtime data has already been imported into the database. You need to periodically run the GTFS-Realtime import script or the library's realtime update function so that fresh data gets written to the database. I realize that this wasn't very clear from the documentation, so I just added some notes about this. Let me know if this solves your issue, or if there is something else. |
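A minimal sketch of that periodic refresh, assuming the openDb(), updateGtfsRealtime(), and getStopTimeUpdates() exports of a recent node-gtfs release (exact names, and whether the query helpers return promises, differ between versions):

```js
// Sketch only: re-import the GTFS-Realtime feed on a timer, then query the
// freshly stored stop time updates. Export names are assumptions based on a
// recent node-gtfs release and may differ in older versions.
import { readFile } from 'node:fs/promises';
import { openDb, updateGtfsRealtime, getStopTimeUpdates } from 'gtfs';

const config = JSON.parse(await readFile('./config.json', 'utf8'));
openDb(config); // open the SQLite database created by the static import

setInterval(async () => {
  await updateGtfsRealtime(config);       // fetch and store the latest realtime data
  const updates = getStopTimeUpdates({}); // may return a promise in older versions
  console.log(`${updates.length} stop time updates currently in the database`);
}, 20_000); // e.g. every 20 seconds, as described below
```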
Yes, I do getStopTimeUpdates() every 20 seconds, and I had to set up a caching mechanism because if there was no update for a trip within the next 20 seconds, the row in the SQLite database was gone. Maybe this is behavior from an old node-gtfs version and not the case anymore? (so I don't need caching) |
Let me know if you still see this behavior - there is a mechanism to keep old data around while the new data is fetched and inserted into the DB, using the is_updated field: https://github.com/BlinkTagInc/node-gtfs/blame/master/lib/import.js#L728 - but I haven't used this feature extensively. |
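Paraphrased, the pattern that field enables looks roughly like this (a sketch with assumed table and column names, not the library's actual import code):

```js
// Sketch of the is_updated idea: keep the previous rows while new data is
// written, then delete only the rows that were not refreshed. Table and
// column names here are assumptions, not node-gtfs's real schema.
import Database from 'better-sqlite3';

function refreshStopTimeUpdates(dbPath, newRows) {
  const db = new Database(dbPath);
  const insert = db.prepare(`
    INSERT OR REPLACE INTO stop_time_updates
      (trip_id, stop_sequence, arrival_delay, departure_delay, is_updated)
    VALUES (@trip_id, @stop_sequence, @arrival_delay, @departure_delay, 1)
  `);

  db.transaction(() => {
    db.prepare('UPDATE stop_time_updates SET is_updated = 0').run(); // mark all existing rows stale
    for (const row of newRows) insert.run(row);                      // fresh rows arrive with is_updated = 1
    db.prepare('DELETE FROM stop_time_updates WHERE is_updated = 0').run(); // drop what was not refreshed
  })();
}
```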
Would this work even if the agency is sending empty RT updates for a trip? I suspect that's what might be happening. |
No - I think if that is the case they will all be removed from the database. I haven't heard of this type of issue with GTFS-Realtime before - do you know if this is a common problem, or do you think it is specific to the agency you are working with? |
I suspect this is specific to the agency, but not sure how to deal with it 😕 |
You could make your own function to do what the built-in query does, but keep or merge the older data yourself when the agency sends those empty updates. |
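One possible shape for such a function, purely as a sketch: wrap the query and merge its results with the last update seen for each (trip_id, stop_sequence), so an empty or partial feed does not wipe everything. The field names and the 10-minute expiry below are assumptions:

```js
// Sketch of a wrapper that keeps the last known update for each
// (trip_id, stop_sequence) even when a later realtime fetch omits it.
// getStopTimeUpdates() is assumed to return rows with trip_id and
// stop_sequence fields; adjust to whatever your version actually returns.
import { getStopTimeUpdates } from 'gtfs';

const lastKnown = new Map();        // key: `${trip_id}:${stop_sequence}` -> { row, seenAt }
const MAX_AGE_MS = 10 * 60 * 1000;  // forget updates after 10 minutes (arbitrary choice)

export function getMergedStopTimeUpdates(query = {}) {
  const now = Date.now();

  // Refresh the cache with whatever the database currently holds.
  for (const row of getStopTimeUpdates(query)) {
    lastKnown.set(`${row.trip_id}:${row.stop_sequence}`, { row, seenAt: now });
  }

  // Drop entries that have not been refreshed for a while.
  for (const [key, entry] of lastKnown) {
    if (now - entry.seenAt > MAX_AGE_MS) lastKnown.delete(key);
  }

  return [...lastKnown.values()].map((entry) => entry.row);
}
```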
I've been debugging the RT data and I think the issue is that this agency pushes only a few stop_sequences in each update. For example, I'll get an update for a given trip_id with info for stop sequences 14, 19, and 24. Then, once the route advances, they push another update with stop sequences 15, 20, and 25, but only with those. What I see in the SQLite database is that only 15, 20, and 25 are present, and 14, 19, and 24 are lost. Does this have to do with cleanStaleRealtimedata? Is there a way I can manage/configure when an RT update should be considered "stale"? I've been comparing the RT results with a local instance of OpenTripPlanner, and it is able to catch and keep all RT updates for all stop sequences using the same RT url to fetch them. So I suspect something is wrong on my node-gtfs project side. |
Additional clarification: the problem is also that there are gaps in the trip updates, providing some stop_sequences and not others. That is a problem at the agency feed level, so I'm not sure how node-gtfs can help with that 😕 |
Digging more into why OpenTripPlanner is able to provide RT updates even when there are gaps in the stop sequences, I read that it interpolates the missing stop times and propagates delays to the following stops. Maybe those two things should come as a feature request in a new issue then? |
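As a rough illustration of the delay-propagation half of that (a hypothetical helper, not something in node-gtfs or OpenTripPlanner):

```js
// Sketch: fill gaps in a trip's stop time updates by carrying the most
// recently reported delay forward to later stop_sequences that had no update.
// stopSequences: every stop_sequence of the trip from the static GTFS, in order.
// updates: realtime rows with { stop_sequence, departure_delay } for some stops.
export function propagateDelays(stopSequences, updates) {
  const bySequence = new Map(updates.map((u) => [u.stop_sequence, u]));
  const filled = [];
  let lastDelay = 0; // assume on time until the first reported delay

  for (const seq of stopSequences) {
    const update = bySequence.get(seq);
    if (update && update.departure_delay != null) {
      lastDelay = update.departure_delay; // real update: take its delay
      filled.push({ stop_sequence: seq, departure_delay: lastDelay, interpolated: false });
    } else {
      // no update for this stop: carry the previous delay forward
      filled.push({ stop_sequence: seq, departure_delay: lastDelay, interpolated: true });
    }
  }
  return filled;
}
```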
Thanks for looking into this issue of missing times - the #162 feature request should cover interpolation and delay propagation. As for controlling when data is considered "stale", or persisting data after subsequent GTFS-Realtime data is fetched and stored in the database, it seems like that may not be a needed use case. |
Hi there,
Is there a way to keep trip_updates data persistent in the SQLite database, or to have it persist/be cached for a fixed period of time?
I've observed that my GTFS RT feed provides data in a very intermittent way, and I'm not getting the latest stop time updates when running getStopTimeUpdates().
Is there a recommended way to keep this info stored in SQLite for at least some time (unless it is changed by a new update to the trip)?
Thanks!