logging methods no longer take a callback, can't reliably use it in some environments (AWS Lambda) #1250
Comments
We’re using a different solution.
That doesn't make it not an issue. Thanks for bringing it up.
EDIT: this does not work, see #1250 (comment) for the solution.

I'm not sure if this is the best way, nor whether it works in all cases, but so far we are handling this by doing the following. I've created a function:

```js
function waitForLogger(logger) {
  return new Promise((resolve) => {
    logger.on('close', resolve);
    logger.close();
  });
}

module.exports = waitForLogger;
```

At the end of our handler, we wait for the function output before returning:

```js
const logger = require('./logger');
const waitForLogger = require('./wait-for-logger');

async function handler(event, context) {
  // your functionality
  logger.log('info', 'This should make it through all the transports');
  // ...
  await waitForLogger(logger);
}
```

We are using the nodejs8.10 Lambda runtime - if you are using an older runtime, you can probably do something like the following:

```js
function handler(event, context, callback) {
  // your functionality
  logger.log('info', 'This should make it through all the transports');
  // ...
  waitForLogger(logger).then(() => callback(null));
}
```
Thanks @dpraul. One of our transports logs to an external service, and that could take a few extra ms. Will
@natesilva thought about this some more.

Therefore, in theory this should work, but I have not tested it yet:

```js
const winston = require('winston');

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console()
  ]
});

process.on('exit', function () {
  console.log('Your process is exiting');
});

logger.on('finish', function () {
  console.log('Your logger is done logging');
});

logger.log('info', 'Hello, this is a raw logging event', { 'foo': 'bar' });
logger.log('info', 'Hello, this is a raw logging event 2', { 'foo': 'bar' });
logger.end();
```
Verified:

Going to add an example with this as a way to resolve the issue. Please feel free to reopen if this does not work for you in Lambda. Functions-as-a-service platforms are 100% in

Would be great if you could contribute an end-to-end AWS lambda example using
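For what it's worth, a minimal sketch of what such an end-to-end Lambda handler could look like with the `finish`/`end()` technique above (untested; the handler shape and transport choice are illustrative, and later comments in this thread found that slow transports may still need a per-transport workaround):

```js
const winston = require('winston');

// Resolves once the logger emits 'finish' after end() has been called.
function flushLogger(logger) {
  return new Promise((resolve) => {
    logger.on('finish', resolve);
    logger.end();
  });
}

exports.handler = async (event, context) => {
  // Create the logger per invocation so a previous end() can't cause
  // "write after end" errors on a warm (reused) Lambda container.
  const logger = winston.createLogger({
    transports: [new winston.transports.Console()],
  });

  logger.info('handling event');
  // ... your business logic ...

  await flushLogger(logger);
  return { statusCode: 200 };
};
```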
I am reviewing this as we speak, and I believe I have verified neither of these methods to work. I'll try to share a full test-case here shortly.
```js
const winston = require('winston');

class CustomTransport extends winston.Transport {
  log({ message }, cb) {
    setTimeout(() => {
      console.log(`custom logger says: ${message}`);
      cb(null);
    }, 3000);
  }
}

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new CustomTransport(),
  ],
});

logger.on('finish', () => {
  console.log('done!');
});

logger.log('info', 'here is some content');
logger.end();
```

Expected output:

Actual output:

The
Sorry, I deleted my comment. My understanding of these stream events is limited, but I can see how
I've gone through the same example with the

Might there be some issue in how events are passed between the

Regardless, @natesilva or @indexzero, you may want to reopen this issue.
@dpraul that seems plausible, but I don't see where that would be:
Clearly there is something being missed in this flow. Going to chat with some Node.js folks and see what I can dig up.
I still don't know what's causing the issue, but I found a workaround that waits for all the transports to finish!

```js
const winston = require('winston');

class CustomTransport extends winston.Transport {
  log({ message }, cb) {
    setTimeout(() => {
      console.log(`custom logger says: ${message}`);
      cb(null);
    }, 3000);
  }
}

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new CustomTransport(),
  ],
});

async function waitForLogger(l) {
  const transportsFinished = l.transports.map(t => new Promise(resolve => t.on('finish', resolve)));
  l.end();
  return Promise.all(transportsFinished);
}

logger.log('info', 'here is some content');
waitForLogger(logger).then(() => console.log('all done!'));
```

Output:

It seems that
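If it helps, a rough sketch (untested; names assumed from the snippet above) of how that per-transport `waitForLogger` could be wired into a Lambda-style handler:

```js
// Assumes `logger` and `waitForLogger` are defined as in the snippet above.
exports.handler = async (event) => {
  logger.log('info', 'handling event');
  // ... business logic ...

  // Wait for every transport to emit 'finish' before returning, so the
  // container can be frozen without losing buffered log lines.
  await waitForLogger(logger);
  return { ok: true };
};
```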
Thanks for continuing to investigate @dpraul. The

Going to verify with some folks.
Thanks to some input from @mcollina @davidmarkclements & @mafintosh I learned that my understanding of

Going to explore how to either back-propagate these events or expose an explicit method on

@natesilva going to leave this issue open until the explicit fix is in, but in the meantime you should be able to use the technique above:
Important modules that might be handy: pump and end-of-stream.
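For example, end-of-stream could replace the hand-rolled 'finish' listeners in the workaround above (a sketch, assuming the same logger and transports; untested):

```js
const eos = require('end-of-stream');

// Promise that settles when a single stream has fully finished (or errored).
const streamFinished = (stream) =>
  new Promise((resolve, reject) => eos(stream, (err) => (err ? reject(err) : resolve())));

// End the logger and wait until every transport has flushed.
async function waitForLogger(logger) {
  const pending = logger.transports.map((t) => streamFinished(t));
  logger.end();
  return Promise.all(pending);
}
```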
@mcollina theoretically, from a Node.js streams perspective, we should be able to implement
@indexzero yes, that would be correct.
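A minimal sketch of a transport that defers its 'finish' until in-flight writes settle, assuming the method being discussed is the Writable `_final` hook (`sendSomewhereAsync` is a placeholder for the real async work):

```js
const Transport = require('winston-transport');

class FlushableTransport extends Transport {
  constructor(opts) {
    super(opts);
    this.pending = new Set(); // in-flight async log operations
  }

  log(info, callback) {
    const p = sendSomewhereAsync(info).finally(() => this.pending.delete(p));
    this.pending.add(p);
    callback();
  }

  // Invoked by the stream machinery before 'finish' is emitted;
  // hold the event back until every pending write has settled.
  _final(callback) {
    Promise.all(this.pending).then(() => callback(), callback);
  }
}
```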
Tell me please, in winston@3 can I no longer use callbacks like in version 2?
And if I don't use a callback, my app closes faster than the message gets logged =(
@TuxGit FYI this has been documented in
Is there a suggested approach to handling this yet? I see there's an unmerged PR #1909 which looks to address it?
After much digging, debugging, and attempting to get the Loggly winston plugin to work in lambda... our solution was to write our own simple logger that talks directly to the Loggly API and the console, and dump Winston. :-P Sorry, it's not currently open source, but it's separable from our product so I could probably publish it if there's interest.

I was pretty disappointed this didn't work. I just couldn't find a way to reliably block waiting for logging to complete with a timeout. We'd end up getting errors instead when calling finish, and logs would get dropped, which made it a mostly useless solution for lambda. You couldn't be sure whether there was an error and hence you didn't get logs, or the logs were just lost. After rolling our own (which I usually try to avoid) it was clear there were problems in the logging.

I'm not sure if the issue was in the Loggly Winston plugin in our case or with Winston itself. But it seemed to boil down to Winston. Either way, it was disappointing.

-Johnny
@dpraul I do not know enough about the stream/lambda internals. When the finish event comes before I attach this listener, will this promise ever be resolved?

Based on your example, consider something like this (some long operation before the finishing logic):

```js
async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    ...
  });

  logger.log('info', 'some message');
  ...
  await someThingThatTakesLongerThanEmittingLogs();

  // Finishing logic
  await waitForLogger(logger); // will this ever be resolved?
  return {}
}
```

or even simpler (no log calls at all):

```js
async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    ...
  });

  // Finishing logic
  await waitForLogger(logger); // will this ever be resolved?
  return {}
}
```
I'm not certain. At the time of writing (which was in 2019, so YMMV), the logger wouldn't fire the
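One possible guard against attaching the listener after 'finish' has already fired (a sketch, assuming a Node version where the logger's underlying stream exposes `writableFinished`):

```js
async function waitForLogger(logger) {
  // If the logger has already finished, attaching a 'finish' listener now would hang forever.
  if (logger.writableFinished) {
    return;
  }
  const done = new Promise((resolve) => logger.once('finish', resolve));
  logger.end();
  return done;
}
```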
I know this is a long, old thread, but I will share how I was able to fix it. First, I want to thank @kasvtv and @dpraul for their shared approach, which helped me define the following implementation:

```js
class AMQPTransport extends Transport {
  constructor(opts) {
    super(opts);
    this.callsInProgress = new Map();
    this.index = 0;
    this.name = 'amqpTransport';
    const { amqp } = opts;
    const { exchange } = amqp;
    this._rabbitOptions = {
      appName: opts.appName,
      url: amqp.url,
      options: amqp.options,
      exchange: {
        name: exchange.name,
        type: exchange.type,
        options: exchange.options,
      },
    };
    this.init();
  }

  async _final(callback) {
    console.log('amqp final');
    await Promise.all(this.callsInProgress.values());
    await disconnect();
    callback();
  }

  log(info, callback) {
    const output = {
      level: info.level,
      message: info.message,
      timestamp: info.timestamp,
    };
    const { appName, exchange } = this._rabbitOptions;
    const i = this.index++;
    const promise = publish(exchange.name, `logs.${output.level}.${appName}`, output);
    this.callsInProgress.set(
      i,
      promise.finally(() => {
        console.count('amqp logged');
        this.callsInProgress.delete(i);
        callback();
      }),
    );
  }

  init() {
    const { url, options, exchange } = this._rabbitOptions;
    createProducer(url, options, (channel) => {
      return Promise.all([channel.assertExchange(exchange.name, exchange.type, exchange.options)]);
    });
  }
}
```

The key piece is the `_final` implementation:

```js
async _final(callback) {
  console.log('amqp final');
  await Promise.all(this.callsInProgress.values());
  await disconnect();
  callback();
}
```

And in order to wait for the logger before closing the process, you can:

```js
const waitForLogger = async () => {
  const loggerFinished = new Promise((resolve) => winstonLogger.on('finish', resolve));
  winstonLogger.end();
  return loggerFinished;
}
```

The

I hope this helps!
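For context, a sketch of how this transport might be wired up; the option shape is inferred from the constructor above and the values are illustrative:

```js
const winstonLogger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new AMQPTransport({
      appName: 'my-service',
      amqp: {
        url: 'amqp://localhost',
        options: {},
        exchange: { name: 'logs', type: 'topic', options: { durable: true } },
      },
    }),
  ],
});

// Later, before the process exits (or the Lambda invocation returns):
// await waitForLogger();
```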
@darkwilly08 Thanks a ton for your contribution! I implemented @dpraul's approach and ran it in production (AWS Lambda) after initial testing looked good. But yesterday I noticed that emitting a lot of small logs right after each other, then directly calling

Your implementation of a local "queue" is simple, elegant and works out of the box.
@darkwilly08 and all others, I have a strange effect with this solution. I stress tested it by emitting 20 logs in a loop. If the logger queue (

Any idea how this is explainable? Is there some sort of "hard auto close timeout" by winston that closes the write stream after a certain amount of time after

Some insights:

```js
async _final (callback) {
  console.log('_final called. cIP length: ' + this.callsInProgress.size); // if this is >= 15, exception is thrown
  await Promise.all(this.callsInProgress.values());
  callback();
}
```

This helps as a workaround:

```js
async function waitForLogger (logger) {
  const loggerDone = new Promise(resolve => logger.on('finish', resolve));
  setTimeout(() => { // <--
    logger.end();
  }, 1000);
  return loggerDone;
}
```

Edit: I think I found one more piece of the puzzle: backpressuring. It seems like Node.js readable-streams buffer 16 objects before flushing. I guess when I have more than 16 items in my queue, the stream needs to empty its buffer more than once, but gets closed before flushing the second portion, and that one then fails. I've no idea though why...
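That 16-object limit matches the default highWaterMark for object-mode streams. A tiny standalone demo of the buffering behaviour (plain Node streams, no winston involved):

```js
const { Writable } = require('stream');

// Object-mode writables default to highWaterMark = 16.
const slow = new Writable({
  objectMode: true,
  write(chunk, _encoding, callback) {
    setTimeout(callback, 100); // simulate a slow transport
  },
});

for (let i = 0; i < 20; i++) {
  const ok = slow.write({ i });
  console.log(i, ok); // write() starts returning false once ~16 objects are buffered
}
```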
Hi @Tilogorn! I am without my laptop until next week; then I will update the code to fix the bug you found. In the meantime you could try to move the callback call outside the finally method - just call it in the log function and maybe it works, fingers crossed!
That worked out of the box. 😮 Could you elaborate on that?

```js
log(info, callback) {
  // ...
  const promise = doSomethingAsyncThatReturnsPromise();
  this.callsInProgress.set(
    i,
    promise.finally(() => {
      this.callsInProgress.delete(i);
    }),
  );
  callback();
}
```
@Tilogorn I am glad to hear it! Well, as you already discovered, buffering more than 15 log callbacks is potentially dangerous, so if you call the callback right after creating the promise - without waiting for the result - you flush the stream buffer instantly and you should never have 15 or more elements in the queue. On the other hand, the promise is still stored in the map so that its resolution can still be awaited in the `_final` method.

Finally, to keep the
@darkwilly08 Thanks, I understand it now! Wrapping the callback inside
You absolute legend! I've been battling this for weeks, but with limited activity in this repo, my hopes were dashed. I've been trying to implement Raygun as a custom transport, and it makes perfect sense, but it simply wouldn't work; I even got the Raygun team to do 2 years of updates to the Raygun library because I thought that was the problem. This really needs to be documented properly!
A PR that documents this would be greatly appreciated, in case anyone ever runs into this scenario again (I am sure that other people are using Lambda, etc.)
I'll see if I have time on Monday whilst at work. Is there any particular reason the working solution has never been fixed properly? It'd be nice to have this work out of the box.
No particular reason besides contributor (lack of) bandwidth :) PRs that make this work out of the box would be very welcomed, too :)
So, I am getting the dreaded "write after end" error.

@darkwilly08 I tried your solution in my custom transport, but it's not working. Would be super great to get a full AWS Lambda example of usage with a custom transport.
OK, after much faffing, I think I have it working for my use case. I have an abstract class, where the logger is a class property.

```js
getLogger = () => {
  if (this.options?.enableWinston && !this.logger?.writable) {
    this.logger = this.createLogger();
    if (this.options.raygunApiKey) {
      this.logger.add(
        new RaygunTransport(
          {
            apiKey: this.options.raygunApiKey,
            tags: this.options.raygunTags,
          },
          {
            level: 'error',
          },
        ),
      );
    }
  }
  return this.logger;
};

createLogger = () => {
  return winston.createLogger({
    level: 'debug',
    exitOnError: false,
    handleExceptions: false, // not set previously, but set now, docs don't really help me understand what disabling it does?
    handleRejections: false, // not set previously, but set now, docs don't really help me understand what disabling it does?
    format: winston.format.combine(
      winston.format.errors({ cause: true, stack: true }),
      winston.format.json(),
    ),
    transports: [
      new winston.transports.Console(),
    ],
  });
};

closeLogger = async () => {
  if (this.logger) {
    if (!this.logger.writable) {
      return Promise.resolve();
    }
    const loggerDone = new Promise(resolve => this.logger?.on('finish', resolve));
    this.logger.end();
    return loggerDone;
  }
  return;
};
```

In my custom raygun transport class, I am using the solution from @darkwilly08:

```ts
async _final(callback: () => void) {
  console.log('raygun final');
  await Promise.all(this.callsInProgress.values());
  callback();
}

async log(info: any, callback: () => void) {
  const error = new Error(info.message, {
    cause: new Error(info.stack),
  });
  const i = this.index++;
  const raygun = this.raygunClient.send(error); // previously called via `async` but now uses the `finally()` promise chain method below. Not sure what will happen if this function fails?
  this.callsInProgress.set(
    i,
    raygun.finally(() => {
      this.callsInProgress.delete(i);
    }),
  );
  callback();
}
```
Nice! If it seems like others are going to hit this issue, it would be great to add some documentation or examples to the readme, so users don’t just google and find this closed issue :) But whatever you think is best. Thanks for your patience and continued efforts, glad to see you got this working.
Yup, I'll try and do a PR to update the docs. But it'd still be nice to get a library-native solution for this, though I have no idea what that would look like.
Back to square one for me. Refactored my logging into its own class, so I can keep it separate from other work. Implemented the various methods mentioned above, and now I get the dreaded "write after end" error again.
So moving it to a separate class caused it to break? That seems suspicious...
Looks like it's AWS-related and how it reuses code across subsequent calls of the function. Trying to find the best place to initialise my logger class. Here is the class; it's not pretty, and I'm sure it's not correct. The issue is I want to be able to call

I just want to initialise the logger, use it, close it afterwards, rinse and repeat for subsequent lambda invocations.

logger:

```js
import { createLogger, format, transports } from 'winston';
import { RaygunTransport } from './Transports/RaygunTransport.js';

export class Logger {
  logger;
  options;

  constructor(options) {
    if (options) {
      this.options = options;
    }
    this.logger = this.getLogger();
  }

  error = (...args) => {
    return this.getLogger().error([...args]);
  };

  warn = (...args) => {
    return this.getLogger().warn([...args]);
  };

  help = (...args) => {
    return this.getLogger().help([...args]);
  };

  data = (...args) => {
    return this.getLogger().data([...args]);
  };

  info = (...args) => {
    return this.getLogger().info([...args]);
  };

  debug = (...args) => {
    return this.getLogger().debug([...args]);
  };

  prompt = (...args) => {
    return this.getLogger().prompt([...args]);
  };

  http = (...args) => {
    return this.getLogger().http([...args]);
  };

  verbose = (...args) => {
    return this.getLogger().verbose([...args]);
  };

  input = (...args) => {
    return this.getLogger().input([...args]);
  };

  silly = (...args) => {
    return this.getLogger().silly([...args]);
  };

  getLogger = () => {
    if (!this.logger?.writable) {
      this.logger = this.createLogger();
      if (this.options?.raygunApiKey) {
        this.logger.add(new RaygunTransport({
          apiKey: this.options.raygunApiKey,
          tags: this.options.raygunTags,
        }, {
          level: 'error',
          handleExceptions: false,
          handleRejections: false,
        }));
      }
    }
    return this.logger;
  };

  createLogger = () => {
    return createLogger({
      level: this.options?.logLevel ?? 'debug',
      exitOnError: false,
      format: format.combine(format.errors({ cause: true, stack: true }), format.json()),
      transports: [
        new transports.Console({
          handleExceptions: true,
          handleRejections: true,
        }),
      ],
    });
  };

  closeLogger = async () => {
    if (this.logger) {
      if (!this.logger.writable) {
        return Promise.resolve();
      }
      const loggerDone = new Promise((resolve) => {
        return this.logger?.on('finish', resolve);
      });
      this.logger.end();
      return loggerDone;
    }
    return;
  };
}
```

This all came about as I needed a way to pass some custom data to the raygun transport. I can't pass it at the point of creating the logger, as the data isn't available, so I have to set it afterwards, which, because it's a transport, is a ball ache.
Hi @DABH @paul-uz! I did some tests: I created a repository with different transport classes to test the issue.
If you check each transport class, you will see that the implementation is almost the same. Extending from

The idea of these tests is to understand a bit more about where the issue is. I have no problem opening a PR, but right now I am not sure what the best approach to fix it is. Sorry for the long message; I hope in some way it can help the discussion of the issue.
I think I have a working implementation, based on your code examples. My issue is that I'm struggling to pass custom data to the custom transport. My other issue with my custom transport is that the stack trace it reports is not from the error that caused the log, but from the transport code itself.
I'm a bit confused by all of this. Am I correct in thinking that the best way to write an async transport is to not use the winston transport class and just use a writable stream? What is the current best solution for a custom async transport for use in AWS Lambda?
No, no chance. I will keep using winston-transport. Following the example repository, I would use asyncLogger.js, removing

I made some examples without winston-transport in order to try to better understand where the issue is.
With that asyncLogger, do you still need to have a helper function to close the logger after your handler code is done?
Did you mean something like this? Yes, it is still necessary.
Are there any transports, like console, for example, where this problem doesn't happen? Having only used winston with the console transport in lambda, I don't think I've ever seen a missed log despite not performing any explicit handling.
In Winston 3, the logging functions no longer take a callback. Previously this could be used to wait until logging had completed:
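In winston 2 that looked roughly like this (a sketch of the old callback-style call; the handler shape and `logger` are illustrative):

```js
// winston 2.x: log methods accepted a final callback fired once logging completed
function handler(event, context, callback) {
  logger.log('info', 'request handled', { requestId: event.requestId }, (err) => {
    // All transports (including slow ones) are done; safe to hand control back to Lambda.
    callback(err, { statusCode: 200 });
  });
}
```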
Winston 3 does allow you to listen to the `logged` event on your transports, but that doesn't give me any easy way to tell when this message that I am currently logging has completed. Since everything is async, the next `logged` event that occurs after you write your message may not be related to the message you just logged. And it's more complex if you are listening for the event on multiple transports.

That makes Winston difficult to use in certain environments. For example, in AWS Lambda we set the `callbackWaitsForEmptyEventLoop` parameter to `false` (documentation). (Sidenote: you do this if you have a database connection or something that cannot be `unref`ed, because otherwise Lambda will never freeze your process.) If set to `false`, Lambda freezes your process as soon as you return results to the caller and may even terminate your process. If your logging transport hasn't finished writing by the time that happens, then you either lose logs or (worse) the logs get written later when Lambda unfreezes your process.

TL;DR: Our Lambda process would normally await (or the callback equivalent) on Winston before returning results to the caller, guaranteeing that logging is done before the process is frozen.

Is there another way you'd recommend to detect when logging is complete?
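For reference, a sketch of the Lambda setting described above (`doWork` and `logger` are placeholders):

```js
exports.handler = (event, context, callback) => {
  // Don't keep the invocation alive waiting on open handles (e.g. DB connection pools).
  context.callbackWaitsForEmptyEventLoop = false;

  doWork(event)
    .then((result) => {
      // Anything the transports haven't flushed by now may be cut off when Lambda freezes the process.
      logger.log('info', 'request finished');
      callback(null, result);
    })
    .catch(callback);
};
```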