Sync manually #73
Thanks @MickL. Can you await async functions in your particular serverless environment? If so, it might be worth trying a very low batchInterval. Try the above and feel free to re-open if the issue persists, or if you spot anything else. Thanks!
I would like to see this as a feature request, and would ask you to reopen this issue. Wouldn't a sync function be quick to integrate? Awaiting each log is not an option, since the request has to be handled quickly. Instead, I want to send the response at the end and then keep the worker alive to send the logs. For that I have a waitUntil function that takes a promise, which could be this new sync function.
I think you can achieve what you're looking for using Promise.all. How about something like:

```javascript
// 'holder' for log promises...
const allLogs = [];

// later, wherever you're creating a log
allLogs.push(
  timber.log("your log message here")
)

// finally, when you need to await...
event.waitUntil(Promise.all(allLogs))
```

This ought to work, since each call to .log() returns a promise that resolves once the log has been synced. I think this is preferable to having a sync 'built in' to the lib, because then we'd need to internally keep logs around in memory to assess whether all promises have been fulfilled, which is potentially expensive on RAM for applications that are handling masses of logs and may not use this feature.
This is not an option. Timber is used across all kinds of services and classes that waitUntil has no access to. You batch all logs and send them together anyway, so why not have a sync function that triggers the batch manually and resets the timer?
I'm not familiar with the environment used by Cloudflare Workers, but it sounds like some kind of dependency injection pattern might be useful. In the short term, while waiting for changes to the lib, this kind of pattern might unblock you:

```javascript
class Logger {
  constructor(logger) {
    this.logs = [];
    this.logger = logger;
  }

  // keep a copy of each promise, and return it to the caller
  log = (...params) => {
    const p = this.logger.log(...params);
    this.logs.push(p);
    return p;
  }

  // await this to know when all logs have been sent
  sync = () => Promise.all(this.logs)
}
```

Then you could create a singleton that is injected into other services/classes, like:

```javascript
// pass this to whichever services do the logging
const logger = new Logger(new Timber(optionsGoHere))
```

Then you'd just log as normal, but via `logger.log(logDataGoesHere)`. And whenever you want to confirm that all logs have been persisted: `await logger.sync()`
It's actually slightly more involved than that, due to a combination of batching and throttling. Logs are batched and sent according to the batchSize and batchInterval settings.

There can be multiple requests in-flight at any one time, depending on how these parameters are set. For example, with a batchSize of 5, firing 10 logs at once would be split into 2 batches of 5, each sent in its own request.

Likewise, 10 logs fired over a 150ms interval would also arrive in 2 batches of 5, since the 100ms upper limit for a 'collection window' of logs would have been exceeded and a request would have been sent.

At the logger level, we're not currently tracking 'pending' logs. Instead, they're passed over to the throttle and batch helpers, which manage their pending workloads and ultimately resolve the log's original promise, which bubbles up to the original .log() call.

We could make some changes to the core logger to track pending promises and expose a method that resolves once everything pending has been sent.

In the meantime, I'll re-open to track the feature and would propose a pattern such as the one above to keep you moving in the interim 👍
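To make the batching behaviour described above concrete, here is a minimal, self-contained sketch of size/interval-based batching. It is illustrative only and not the library's actual implementation; the `Batcher` class, its `send` callback, and the `batchSize`/`batchInterval` option names are assumptions based on this thread. A batch is flushed when either `batchSize` logs have accumulated or `batchInterval` milliseconds have passed since the first buffered log:

```javascript
// Illustrative sketch of size/interval-based log batching (not the real
// library internals). Each log() returns a promise that settles once the
// batch containing that log has been sent.
class Batcher {
  constructor(send, { batchSize = 5, batchInterval = 100 } = {}) {
    this.send = send;            // async function that ships an array of logs
    this.batchSize = batchSize;  // flush when this many logs are buffered
    this.batchInterval = batchInterval; // ...or after this many ms
    this.buffer = [];
    this.timer = null;
  }

  log(entry) {
    return new Promise((resolve, reject) => {
      this.buffer.push({ entry, resolve, reject });
      if (this.buffer.length >= this.batchSize) {
        // Size limit reached: send immediately
        this.flush();
      } else if (!this.timer) {
        // First log in a new 'collection window': start the interval timer
        this.timer = setTimeout(() => this.flush(), this.batchInterval);
      }
    });
  }

  flush() {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    const batch = this.buffer.splice(0, this.buffer.length);
    if (batch.length === 0) return Promise.resolve();
    // Settle every buffered log's original promise once the request finishes
    return this.send(batch.map((b) => b.entry))
      .then(() => batch.forEach((b) => b.resolve()))
      .catch((err) => batch.forEach((b) => b.reject(err)));
  }
}
```

With `batchSize: 5` and `batchInterval: 100`, this reproduces both scenarios above: 10 logs fired at once flush as two size-triggered batches, while logs trickling in slowly flush on the 100ms timer.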
Thanks for the fast and detailed answer! This might not be the best solution, but it is one I could implement for now. Instead of the requested "sync manually now" function, a simple "are there any unsynced logs?" function that returns a promise which resolves when everything pending has been synced would be nice, too.
👍 that's actually exactly what I had in mind. It should be relatively simple to add; I just wanted to make sure you weren't blocked in the meantime while waiting for it to land.
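As a sketch of what that feature could look like, here is one way to track only *pending* log promises and prune them as soon as they settle, which also addresses the memory concern raised earlier. The `SyncableLogger` name and its `sync()` method are hypothetical, not part of the library's real API:

```javascript
// Hypothetical sketch of the discussed feature: track only pending log
// promises and expose sync(), which resolves once all of them settle.
// Settled promises are deleted immediately, so memory use is proportional
// to in-flight logs rather than to the total number of logs.
class SyncableLogger {
  constructor(logger) {
    this.logger = logger;   // any object with an async log() method
    this.pending = new Set();
  }

  log(...params) {
    const p = this.logger.log(...params);
    this.pending.add(p);
    // Prune as soon as the log settles, whether fulfilled or rejected
    p.finally(() => this.pending.delete(p)).catch(() => {});
    return p;
  }

  // Resolves when every currently-pending log has settled
  sync() {
    return Promise.allSettled([...this.pending]);
  }
}
```

In a Worker this could then be used as `event.waitUntil(logger.sync())` after the response has been sent. Using `Promise.allSettled` rather than `Promise.all` means one failed log request doesn't prevent the sync promise from resolving.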
That would be nice. In a serverless environment one probably wants to disable throttled syncing entirely and sync manually at the end of a request, ideally after the response has already been sent. At least for me it would not make sense to send logs in the middle of the running process, and I can't think of a good batchInterval: if I set it too high, like 1s, the process has to wait a long time after it has finished; if I set it too low, like 50ms, logs are sent while the process is still running, wasting performance. So it would be nice to have both:

- an option to disable automatic/throttled syncing, and
- a function to sync manually (and await) once the request is done.
Within a serverless environment the batchInterval is a problem: the script has already run and exited, but the logs have not been synced yet. Therefore it would be nice to have a function to sync manually if there are unsynced logs.