
Fix Unintuitive Behavior #573

Merged: 6 commits, Aug 8, 2024
3 changes: 2 additions & 1 deletion src/dbos-executor.ts
@@ -675,8 +675,9 @@ export class DBOSExecutor implements DBOSExecutorContext {
   wCtxt.span.setStatus({ code: SpanStatusCode.OK });
 } else {
   // Record the error.
-  const e: Error = err as Error;
+  const e = err as Error & {dbos_already_logged?: boolean};
   this.logger.error(e);
+  e.dbos_already_logged = true
   if (wCtxt.isTempWorkflow) {
     internalStatus.name = `${DBOSExecutor.tempWorkflowName}-${wCtxt.tempWfOperationType}-${wCtxt.tempWfOperationName}`;
   }
5 changes: 4 additions & 1 deletion src/httpServer/server.ts
@@ -311,7 +311,10 @@ async checkPortAvailability(port: number, host: string): Promise<void> {
   oc.span.setStatus({ code: SpanStatusCode.OK });
 } catch (e) {
   if (e instanceof Error) {
-    oc.logger.error(e);
+    const annotated_e = e as Error & {dbos_already_logged?: boolean};
+    if (annotated_e.dbos_already_logged !== true) {
+      oc.logger.error(e);
+    }
     oc.span.setStatus({ code: SpanStatusCode.ERROR, message: e.message });
     let st = (e as DBOSResponseError)?.status || 500;
     const dbosErrorCode = (e as DBOSError)?.dbosErrorCode;
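Taken together, these two hunks implement a simple mark-then-check pattern: the executor logs the error once and tags it, and the HTTP handler skips re-logging anything already tagged. A minimal standalone sketch of the idea (hypothetical `logger`, `recordWorkflowError`, and `recordHandlerError` names, not the library's code):

```typescript
// Sketch of the mark-then-check pattern above; names are placeholders.
type MaybeLogged = Error & { dbos_already_logged?: boolean };

const logger = { error: (e: Error) => console.error(e) };

// First logging site (e.g., the workflow executor): log, then mark the error.
function recordWorkflowError(err: unknown): void {
  const e = err as MaybeLogged;
  logger.error(e);
  e.dbos_already_logged = true; // downstream handlers can skip re-logging
}

// Second logging site (e.g., the HTTP handler): log only if not already logged.
function recordHandlerError(err: unknown): void {
  const e = err as MaybeLogged;
  if (e.dbos_already_logged !== true) {
    logger.error(e);
  }
}

// The same Error object flows through both sites but is logged only once.
const boom = new Error("boom");
recordWorkflowError(boom);
recordHandlerError(boom); // no duplicate log line
```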
2 changes: 1 addition & 1 deletion src/scheduler/scheduler.ts
@@ -15,7 +15,7 @@ export enum SchedulerMode {

 export class SchedulerConfig {
   crontab: string = '* * * * *'; // Every minute
-  mode ?: SchedulerMode = SchedulerMode.ExactlyOncePerInterval;
+  mode ?: SchedulerMode = SchedulerMode.ExactlyOncePerIntervalWhenActive;
Collaborator commented:

I wonder if we should just get rid of the default and make people think about which one they want. It's a pretty significant difference.

I wonder about communicator retries also... it's pretty fundamental.

@kraftp (Contributor, Author) commented on Aug 8, 2024:

To eliminate the breaking change, I'm going to revert the communicator retries change and instead log exceptions retried by communicators so they aren't swallowed.

@kraftp (Contributor, Author) commented on Aug 8, 2024:

For the scheduled workflows, it never seems to have worked right anyways, so I'm fine changing the default and making sure it works as intended.

@kraftp (Contributor, Author) commented:

I don't want to have no default because that's an even bigger breaking change, and now this PR has no breaking changes at all (except to what's already broken).

 }

 ////
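As the discussion above notes, the default mode matters; an application that cares about the difference can set it explicitly instead of relying on the default. A minimal sketch against the `SchedulerConfig` shape shown in this hunk; `registerScheduled` and the import path are hypothetical placeholders for wiring that lives outside this diff:

```typescript
import { SchedulerConfig, SchedulerMode } from './scheduler/scheduler';

// Hypothetical stand-in for whatever registers the scheduled function.
function registerScheduled(fn: (when: Date) => Promise<void>, cfg: SchedulerConfig): void {
  console.log(`registered crontab "${cfg.crontab}" with mode ${cfg.mode}`);
  void fn(new Date());
}

// Spell the mode out rather than relying on the default, as the review suggests.
const config = new SchedulerConfig();
config.crontab = '*/5 * * * *'; // every 5 minutes
config.mode = SchedulerMode.ExactlyOncePerIntervalWhenActive;

registerScheduled(async (when) => {
  console.log(`scheduled tick for ${when.toISOString()}`);
}, config);
```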
3 changes: 2 additions & 1 deletion src/workflow.ts
@@ -781,7 +781,8 @@ export class WorkflowContextImpl extends DBOSContextImpl implements WorkflowCont
 try {
   result = await commFn.call(clsInst, ctxt, ...args);
 } catch (error) {
-  this.logger.error(error);
+  const e = error as Error
+  this.logger.warn(`Communicator error being automatically retried. Attempt ${numAttempts} of ${ctxt.maxAttempts}. ${e.stack}`);
   span.addEvent(`Communicator attempt ${numAttempts + 1} failed`, { "retryIntervalSeconds": intervalSeconds, "error": (error as Error).message }, performance.now());
   if (numAttempts < ctxt.maxAttempts) {
     // Sleep for an interval, then increase the interval by backoffRate.
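The retry loop around this hunk (only partly visible here) sleeps between attempts and grows the interval by a backoff rate, and failed attempts are now surfaced as warnings rather than errors. A generic sketch of that shape, with assumed default values for `maxAttempts`, `intervalSeconds`, and `backoffRate` (not the DBOS implementation itself):

```typescript
// Generic retry-with-backoff sketch: each failed attempt is logged as a
// warning, then we sleep and grow the interval before trying again.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,     // assumed defaults for illustration
  intervalSeconds = 1,
  backoffRate = 2,
): Promise<T> {
  let interval = intervalSeconds;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const e = error as Error;
      if (attempt === maxAttempts) {
        throw e; // out of retries: let the caller handle (and log) the failure
      }
      console.warn(`Attempt ${attempt} of ${maxAttempts} failed, retrying in ${interval}s. ${e.stack}`);
      await sleep(interval * 1000);
      interval *= backoffRate; // increase the interval by the backoff rate
    }
  }
  throw new Error('unreachable');
}
```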
2 changes: 1 addition & 1 deletion tests/failures.test.ts
@@ -179,7 +179,7 @@ class FailureTestClass {
   return await ctxt.invoke(FailureTestClass).testSerialError(maxRetry);
 }

-@Communicator({ intervalSeconds: 1, maxAttempts: 2 })
+@Communicator({ retriesAllowed: true, intervalSeconds: 1, maxAttempts: 2 })
 static async testFailCommunicator(ctxt: CommunicatorContext) {
   FailureTestClass.cnt++;
   if (ctxt.retriesAllowed && FailureTestClass.cnt !== ctxt.maxAttempts) {
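The test now opts into retries explicitly, so with `maxAttempts: 2` the communicator is expected to fail on the first attempt and succeed on the second. A small self-contained illustration of that counting pattern (plain TypeScript with placeholder names, not the DBOS test runtime):

```typescript
// Mirrors the test's counting pattern: the function fails until the final
// allowed attempt, so with maxAttempts = 2 the retry makes it succeed.
let cnt = 0;
const maxAttempts = 2;

async function testFailCommunicator(): Promise<string> {
  cnt++;
  if (cnt !== maxAttempts) {
    throw new Error(`bad number ${cnt}`);
  }
  return 'success';
}

async function run(): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      console.log(await testFailCommunicator()); // prints "success" on attempt 2
      return;
    } catch (e) {
      console.warn(`attempt ${attempt} failed: ${(e as Error).message}`);
    }
  }
}

void run();
```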