Write to SelfLog when the internal buffer size limit is hit #130

Open
akilin opened this issue Jan 19, 2020 · 4 comments

Comments

@akilin

akilin commented Jan 19, 2020

I was playing around with a dockerized Seq instance and noticed that it only saved a small part of the total logs sent to it, without reporting any issues.
I am not sure whether the issue is with Seq or with the Serilog sink.
A minimal repo with steps to reproduce can be found at the end of this post.

Log.Logger = new LoggerConfiguration()
    .WriteTo.File($"log{DateTime.UtcNow.ToString("yyyy-dd-M--HH-mm-ss")}.txt") // file sink, used as the reference count
    .WriteTo.Seq("http://localhost:5341")                                      // Seq sink under test
    .CreateLogger();

Serilog.Debugging.SelfLog.Enable(Console.Error); // surface any sink errors on stderr

int counter = 0;
while (counter < 1_000_000)
{
    Log.Information("count is {c} {g}", counter++, counter / 10_000);
}

Log.CloseAndFlush();

The code above writes logs to both the file and Seq.
Once the app finishes, the file contains 1_000_000 rows as expected, while Seq only has 12-20% of that:
select count(*) from stream => 185357 (this number varies between runs, but is usually between 120-200k).
No errors are thrown or written to the console.
Diagnostics info can be found inside the repo, in the file seq-diagnostics-8d79c5e01ed3e68.txt.

Steps to reproduce:
1. Clone the repo: https://github.com/akilin/demo-seq-losing-logs
2. Start up Seq in Docker: docker-compose up -d
3. Run the app: dotnet run

@nblumhardt
Member

Thanks for posting this. The issue here is likely to be the default queue size limit in the Seq sink, configured by passing queueSizeLimit to WriteTo.Seq():

https://github.com/serilog/serilog-sinks-seq/blob/5282ccd8abafaba6f60c8188677843a24bf70216/src/Serilog.Sinks.Seq/SeqLoggerConfigurationExtensions.cs#L82

The default value is 100,000, which suits most apps but is not enough for the kind of synthetic case being tested here.

Passing a larger queue size, or sleeping for a millisecond or two on each iteration would resolve it.
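
For example, a larger queue size looks like this (the exact value below is illustrative; size it to the biggest burst you expect):

Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://localhost:5341", queueSizeLimit: 1_000_000) // default is 100,000
    .CreateLogger();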

The underlying batching mechanism doesn't currently write events to SelfLog when the queue is exhausted:

https://github.com/serilog/serilog-sinks-periodicbatching/blob/c4d521bad90cb96adf2a4900c03a9aab519631de/src/Serilog.Sinks.PeriodicBatching/Sinks/PeriodicBatching/PeriodicBatchingSink.cs#L250

This is because starting to emit a large number of logs via an infrequently-used channel when the system is under stress might itself lead to an outage. Writing a single warning event in this situation, using some synchronization to ensure it's only emitted once per application run, has been suggested in the past but so far is unimplemented.
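
For the curious, such a once-only warning might look roughly like this; the class, field, and message below are purely illustrative, not the sink's actual code:

using System.Threading;
using Serilog.Debugging;

class QueueOverflowWarning
{
    static int _warned; // 0 = not yet warned this run, 1 = already warned

    public static void WarnOnce()
    {
        // Interlocked.Exchange returns the previous value, so only the first
        // caller per application run actually writes to SelfLog.
        if (Interlocked.Exchange(ref _warned, 1) == 0)
        {
            SelfLog.WriteLine("Queue size limit reached; further events will be dropped");
        }
    }
}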

Hope this helps!
Nick

@akilin
Author

akilin commented Jan 20, 2020

Hi, thank you for the quick and detailed response.

Passing a larger queue size, or sleeping for a millisecond or two on each iteration would resolve it.

Both of these options work, and I now see the same number of logs in Seq as in the file, so thank you.
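
Concretely, the throttled variant of the repro loop looks roughly like this (the sleep duration is arbitrary, not exactly what I ran):

while (counter < 1_000_000)
{
    Log.Information("count is {c} {g}", counter++, counter / 10_000);
    Thread.Sleep(1); // requires using System.Threading; gives the background batcher time to drain the queue
}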

I do think some kind of warning mechanism should be in place, as the only reason I noticed that logs were missing is that the discrepancy was so large. If someone exceeds the default limit only barely, they will lose some logs without any warning and probably won't even know about it.

That said, my original issue is resolved, so feel free to close the ticket. Unless you want to keep it as a reminder to add that warning 😄

@nblumhardt nblumhardt changed the title Seq only saving around 15% of total logs sent to it Write to SelfLog when the internal buffer size limit is hit Jan 20, 2020
@nblumhardt
Member

Thanks; I'll leave this open, for now, as you suggest 👍

@sspl-vssaini

Yes! I, too, faced a similar kind of issue.

We have Seq deployed in a Docker container on Azure. One of our apps, deployed on Azure App Service, sends logs to Seq via Serilog.

Since I have configured the sink with bufferBaseFilename, I can see that a few logs are missing in Seq. However, the missed records are present in the corresponding buffer file, which was created as AppNameHere-20211014_008.clef with a size of 3937 KB.

I had assumed that the logs from this file would eventually get pushed to Seq, but it seems the missed ones never do.

So,

  • Can you please share what my next course of action should be to handle such a case?
  • Is it safe to enable SelfLog in production?
  • How can I enable SelfLog to write into the Azure logs? (A rough sketch of what I have in mind is below.)
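
What I'm currently considering, assuming Azure App Service's log stream picks up console and Trace output once application logging is enabled, is simply pointing SelfLog at one of those (purely a sketch, not verified):

using System;
using Serilog.Debugging;

// Route Serilog's self-diagnostics to stderr, which the App Service log
// stream should capture when application logging is enabled (assumption).
SelfLog.Enable(Console.Error);

// Alternatively, go through System.Diagnostics.Trace:
// SelfLog.Enable(message => System.Diagnostics.Trace.WriteLine(message));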
