feat(trigger): Add "time" trigger
Dirreke committed Dec 2, 2023
1 parent de3d6c8 commit 7e5a1d5
Showing 8 changed files with 349 additions and 11 deletions.
2 changes: 2 additions & 0 deletions Cargo.toml
@@ -25,6 +25,7 @@ compound_policy = []
delete_roller = []
fixed_window_roller = []
size_trigger = []
time_trigger = []
json_encoder = ["serde", "serde_json", "chrono", "log-mdc", "log/serde", "thread-id"]
pattern_encoder = ["chrono", "log-mdc", "thread-id"]
ansi_writer = []
@@ -41,6 +42,7 @@ all_components = [
"delete_roller",
"fixed_window_roller",
"size_trigger",
"time_trigger",
"json_encoder",
"pattern_encoder",
"threshold_filter"
36 changes: 32 additions & 4 deletions docs/Configuration.md
@@ -171,10 +171,12 @@ my_rolling_appender:
The new component is the _policy_ field. A policy must have a `kind` like most
other components; the default (and only supported) policy is `kind: compound`.

The _trigger_ field is used to dictate when the log file should be rolled. The
only supported trigger is `kind: size`. There is a required field `limit`
which defines the maximum file size prior to a rolling of the file. The limit
field requires one of the following units in bytes, case does not matter:
The _trigger_ field is used to dictate when the log file should be rolled. It
supports two types: `size` and `time`. Both require a `limit` field.

For `size`, the `limit` field is a string which defines the maximum file size
prior to rolling the file. The limit field requires one of the following
units in bytes (case-insensitive):

- b
- kb/kib
@@ -190,6 +192,32 @@ trigger:
limit: 10 mb
```

For `time`, the `limit` field is a string which defines the interval at which to
roll the file. The limit field supports the following units (seconds are assumed
if no unit is specified), case-insensitive:

- second[s]
- minute[s]
- hour[s]
- day[s]
- week[s]
- month[s]
- year[s]

> Note: The log file is rolled on whole-unit time boundaries. For example, if the
> `limit` is set to `2 day`, the log file will be rolled at 0:00 every other day,
> regardless of when `log4rs` was started or when the log file was created. This
> means that the initial log file will likely be rolled before the limit is
> reached.

For example:

```yml
trigger:
kind: time
limit: 7 day
```

The _roller_ field supports two types: delete and fixed_window. The delete
roller does not take any other configuration fields. The fixed_window roller
supports three fields: pattern, base, and count. The most current log file will
21 changes: 21 additions & 0 deletions examples/sample_config.yml
@@ -6,7 +6,28 @@ appenders:
filters:
- kind: threshold
level: info
file:
kind: file
path: "log/log.log"
encoder:
pattern: "[{d(%Y-%m-%dT%H:%M:%S%.6f)} {h({l}):<5.5} {M}] {m}{n}"
rollingfile:
kind: rolling_file
path: "log/log2.log"
encoder:
pattern: "[{d(%Y-%m-%dT%H:%M:%S%.6f)} {h({l}):<5.5} {M}] {m}{n}"
policy:
trigger:
kind: time
limit: 1 minute
roller:
kind: fixed_window
pattern: "log/old-log-{}.log"
base: 0
count: 2
root:
level: info
appenders:
- stdout
- file
- rollingfile
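
For orientation, a config like the sample above is typically loaded once at
startup. A minimal sketch follows; the path and the `expect` message are
illustrative, not part of this commit:

```rust
fn main() {
    // Load the YAML config shown above. `Default::default()` supplies
    // log4rs's standard deserializers, which should include the new time
    // trigger when the `time_trigger` feature is enabled.
    log4rs::init_file("examples/sample_config.yml", Default::default())
        .expect("failed to initialize log4rs");

    // With the sample config, this record goes to stdout, `file`, and
    // `rollingfile`; the rolling file is rolled roughly every minute.
    log::info!("logging initialized");
}
```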
19 changes: 12 additions & 7 deletions src/append/rolling_file/mod.rs
@@ -167,12 +167,8 @@ impl Append for RollingFileAppender {
// TODO(eas): Perhaps this is better as a concurrent queue?
let mut writer = self.writer.lock();

let len = {
let writer = self.get_writer(&mut writer)?;
self.encoder.encode(writer, record)?;
writer.flush()?;
writer.len
};
let log_writer = self.get_writer(&mut writer)?;
let len = log_writer.len;

let mut file = LogFile {
writer: &mut writer,
@@ -182,7 +178,16 @@

// TODO(eas): Idea: make this optionally return a future, and if so, we initialize a queue for
// data that comes in while we are processing the file rotation.
self.policy.process(&mut file)

// First, let the policy roll the file if a trigger fires
self.policy.process(&mut file)?;

// Second, encode and flush the record
let writer_file = self.get_writer(&mut writer)?;
self.encoder.encode(writer_file, record)?;
writer_file.flush()?;

Ok(())
}

fn flush(&self) {}
3 changes: 3 additions & 0 deletions src/append/rolling_file/policy/compound/trigger/mod.rs
@@ -9,6 +9,9 @@ use crate::config::Deserializable;
#[cfg(feature = "size_trigger")]
pub mod size;

#[cfg(feature = "time_trigger")]
pub mod time;

/// A trait which identifies if the active log file should be rolled over.
pub trait Trigger: fmt::Debug + Send + Sync + 'static {
/// Determines if the active log file should be rolled over.
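
The trait body is collapsed in the hunk above. For orientation only, a custom
trigger might look roughly like the sketch below; the `trigger` method
signature is an assumption based on the collapsed trait, and `AlwaysRoll` is
purely illustrative.

```rust
use log4rs::append::rolling_file::policy::compound::trigger::Trigger;
use log4rs::append::rolling_file::LogFile;

/// Illustrative trigger that requests a roll on every write. The real
/// `size` and `time` triggers live in this module behind the
/// `size_trigger` and `time_trigger` features.
#[derive(Debug)]
struct AlwaysRoll;

impl Trigger for AlwaysRoll {
    // Assumed signature; the trait body is not shown in this diff.
    fn trigger(&self, _file: &LogFile) -> anyhow::Result<bool> {
        Ok(true)
    }
}
```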
