
Various additions #2

Merged
merged 13 commits into main from feature/docker-image-workflow on May 28, 2024

Conversation

@micahmo (Member) commented Apr 24, 2024

This PR makes a few enhancements for the Thunder server.

  • Added a workflow to build Docker images, hosted right here on GitHub Container Registry. Builds are triggered by any push or PR to any branch (so we can easily test things), but only the ones built from the main branch are tagged latest so that they automatically work for people pulling with no tag. (https://github.com/thunder-app/thunder_server/pkgs/container/thunder_server)
  • Added the ability to customize the Postgres hostname, port, username, password, and database, with fallbacks in case any env vars are empty.
  • Create the Thunder database if it doesn't exist.
    • I'm having a really hard time making this work. The problem comes down to the fact that the Sequelize connection is instantiated as part of the module import process, so there's no opportunity to run an async task like creating the db first. I also couldn't get top-level await statements to work without a ripple effect of upgrading the module target, etc. Maybe you can help me with this part, or maybe we can just provide instructions for how to create a db. (There's a rough sketch of one possible approach right after this list.)
  • Allow the service to start without the APN info, for people only interested in UnifiedPush.
  • Ability to generate a test notification (only works with UnifiedPush for now). See #1364.
  • Increase fetch timeout as I was having issues talking to ntfy.
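
To make the Postgres bullets concrete, here is a rough sketch of what the env-var fallbacks and the database bootstrap could look like. The POSTGRES_* variable names, the initDatabase helper, and the use of the plain pg client are illustrative assumptions, not the code in this PR.

import { Client } from "pg";
import { Sequelize } from "sequelize";

// Fall back to sensible defaults when any env var is unset or empty.
const host = process.env.POSTGRES_HOST || "localhost";
const port = Number(process.env.POSTGRES_PORT || "5432");
const user = process.env.POSTGRES_USER || "postgres";
const password = process.env.POSTGRES_PASSWORD || "postgres";
const database = process.env.POSTGRES_DB || "thunder";

// Create the Thunder database with a throwaway pg connection if it is missing,
// then hand back a Sequelize instance pointed at it. Keeping this behind an
// async function (instead of instantiating Sequelize at import time) is what
// sidesteps the "no async work during module import" problem described above.
export async function initDatabase(): Promise<Sequelize> {
  const client = new Client({ host, port, user, password, database: "postgres" });
  await client.connect();
  const existing = await client.query("SELECT 1 FROM pg_database WHERE datname = $1", [database]);
  if (existing.rows.length === 0) {
    // Identifiers can't be parameterized, so the (trusted) name is interpolated.
    await client.query(`CREATE DATABASE "${database}"`);
  }
  await client.end();

  return new Sequelize(database, user, password, { host, port, dialect: "postgres" });
}

The entry point would then await initDatabase() once at startup and pass (or export) the resulting instance to the models, instead of having them grab a connection at import time.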

I'm going to leave this PR as a draft until I can get the service up and running for myself! EDIT: Actually it looks like drafts aren't configured. 😕

@micahmo micahmo requested a review from hjiangsu April 24, 2024 19:54
@micahmo micahmo force-pushed the feature/docker-image-workflow branch from 2380584 to d503177 on May 10, 2024 17:43
@micahmo micahmo force-pushed the feature/docker-image-workflow branch from ae0356f to 86a3bcc on May 10, 2024 18:10
@micahmo micahmo force-pushed the feature/docker-image-workflow branch 15 times, most recently from bcd254e to 567bf41 on May 13, 2024 15:45
@micahmo micahmo force-pushed the feature/docker-image-workflow branch 2 times, most recently from 23234df to 8792458 on May 13, 2024 20:57
@micahmo micahmo changed the title from "Add docker build/push workflow" to "Various additions" on May 13, 2024
@micahmo micahmo force-pushed the feature/docker-image-workflow branch from 8792458 to 2eb28cc on May 13, 2024 21:13
@micahmo (Member, Author) commented May 13, 2024

@hjiangsu Just tagging you on this to let you know I've gotten the server in a good enough place that I think I can daily drive UnifiedPush notifications! We'll see how things go. 😊


import { UnifiedPushObject } from "./../../types/unified_push_object";
import { setGlobalDispatcher, Agent } from "undici";

// Raise undici's global connect timeout to 30s for every fetch call.
setGlobalDispatcher(new Agent({ connect: { timeout: 30_000 } }));
@micahmo (Member, Author) commented on this change:

I increased this timeout as it seems like my ntfy server is a little slower than the official one. However, it's concerning to me that a timeout can crash the whole server. Is there any way for us to automatically recover and/or retry? This is where my already non-existent node knowledge falls flat. 😊

@hjiangsu (Member) commented:

Thanks for the additions, especially the docker workflow!

Is there any way for us to automatically recover and/or retry?

There are a few ways to do this! Typically, we can just add a try-catch on the function that caused the error. Then, we can create a queue (if needed) to automatically retry. Or, in our case, we can update the database table record (specifically, the timestamp attribute) so that the notification job catches it again at the next interval.

Could you provide a bit more details on the timeout issue you're encountering? I might be able to find an alternate solution here!

@micahmo (Member, Author) commented May 14, 2024

Typically, we can just add a try-catch on the function that caused the error.

What do you think about putting a try/catch around all of checkNotifications? Since that's sort of the "main thread" of the server, I'd hate for any exception in there to take us down. And yeah, as long as timestamp doesn't get updated until we've done the processing successfully, we should catch it the next time like you said!
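
Something like this rough sketch is what I'm picturing (the 60-second interval and the logging are placeholders, not the server's real scheduling code):

// Wrap the periodic job so one failure can't take down the whole process.
// checkNotifications is the existing notification job discussed above.
setInterval(async () => {
  try {
    await checkNotifications();
  } catch (error) {
    console.error("checkNotifications failed, will retry on the next run", error);
    // As long as the record's timestamp is only updated after a successful push,
    // the same notification gets picked up again on the next interval.
  }
}, 60_000);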

Could you provide a bit more details on the timeout issue you're encountering? I might be able to find an alternate solution here!

Sure! I was getting this exception from fetch when pushing to my ntfy server.

TypeError: fetch failed
    at node:internal/deps/undici/undici:12502:13
    at processTicksAndRejections (node:internal/process/task_queues:95:5) {
  [cause]: ConnectTimeoutError: Connect Timeout Error
      at onConnectTimeout (node:internal/deps/undici/undici:6635:28)
      at node:internal/deps/undici/undici:6587:50
      at Immediate._onImmediate (node:internal/deps/undici/undici:6619:13)
      at processImmediate (node:internal/timers:478:21) {
    code: 'UND_ERR_CONNECT_TIMEOUT'
  }
}

@micahmo (Member, Author) commented May 14, 2024

I pushed another change which fixes the sort type when retrieving notifications. Because the query wasn't sorted by New and results are limited to 50, it would miss new notifications and find nothing unread (for my "real" account, which has >50 replies 😊).
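
For context, the change amounts to roughly the request below, assuming the server reads replies from Lemmy's HTTP API; the instance URL, token handling, and endpoint details are shown purely to illustrate the sort/limit interaction, not as a copy of the actual code.

// Illustration only: ask Lemmy for the *newest* unread replies. Without
// sort=New, a 50-item page of older replies can hide everything unread.
async function fetchUnreadReplies(instance: string, jwt: string) {
  const params = new URLSearchParams({
    sort: "New",
    unread_only: "true",
    limit: "50",
    page: "1",
  });
  const response = await fetch(`${instance}/api/v3/user/replies?${params}`, {
    headers: { Authorization: `Bearer ${jwt}` }, // placeholder auth
  });
  const { replies } = await response.json();
  return replies;
}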

@micahmo (Member, Author) commented May 15, 2024

I pushed another commit which uses the slimmer payload object for reply notifications.

@hjiangsu hjiangsu merged commit a85b511 into main May 28, 2024
2 checks passed
@hjiangsu hjiangsu deleted the feature/docker-image-workflow branch May 28, 2024 20:37