Action plugins
add_file_name
It adds a field containing the file name to the event.
It is applicable only to the k8s and file input plugins.
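A minimal config sketch; the field parameter name is an assumption, so check the linked details for the exact option:

```yaml
pipelines:
  example_pipeline:
    input:
      type: file            # add_file_name works only with the file and k8s inputs
    actions:
    - type: add_file_name
      field: file_name      # assumed parameter name
    ...
```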
More details...
add_host
It adds a field containing the hostname to an event.
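A minimal config sketch; the field parameter name is an assumption, so check the linked details for the exact option:

```yaml
actions:
- type: add_host
  field: host  # assumed: name of the target field
```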
More details...
convert_date
It converts date/time data in a field to a different format.
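A hedged config sketch; the parameter names below (field, source_formats, target_format) are assumptions, so check the linked details for the actual options:

```yaml
actions:
- type: convert_date
  field: time                  # assumed: field holding the date/time value
  source_formats: ['rfc3339']  # assumed: formats to try when parsing
  target_format: 'unixtime'    # assumed: format to write back
```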
More details...
convert_log_level
It converts the log level field according to RFC-5424.
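For reference, RFC-5424 defines eight severity levels: emergency (0), alert (1), critical (2), error (3), warning (4), notice (5), informational (6) and debug (7). A minimal config sketch (the field parameter name is an assumption):

```yaml
actions:
- type: convert_log_level
  field: level  # assumed: field holding the log level value
```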
More details...
convert_utf8_bytes
It converts multiple UTF-8-encoded bytes to corresponding characters.
Supports unicode (\u... and \U...), hex (\x...) and octal (\{0-3}{0-7}{0-7}) encoded bytes.
Note: Escaped and unescaped backslashes are treated the same.
For example, the following 2 values will be converted to the same result:
\x68\x65\x6C\x6C\x6F and \\x68\\x65\\x6C\\x6C\\x6F => hello
Examples
pipelines:
  example_pipeline:
    ...
    actions:
    - type: convert_utf8_bytes
      fields:
      - obj.field
    ...
The original event:
{
  "obj": {
    "field": "\\xD0\\xA1\\xD0\\x98\\xD0\\xA1\\xD0\\xA2\\xD0\\x95\\xD0\\x9C\\xD0\\x90.xml"
  }
}
The resulting event:
{
  "obj": {
    "field": "СИСТЕМА.xml"
  }
}
The original event:
{
  "obj": {
    "field": "$\\110\\145\\154\\154\\157\\054\\040\\146\\151\\154\\145\\056\\144!"
  }
}
The resulting event:
{
  "obj": {
    "field": "$Hello, file.d!"
  }
}
The original event:
{
  "obj": {
    "field": "$\\u0048\\u0065\\u006C\\u006C\\u006F\\u002C\\u0020\\ud801\\udc01!"
  }
}
The resulting event:
{
  "obj": {
    "field": "$Hello, 𐐁!"
  }
}
The original event:
{
  "obj": {
    "field": "{\"Dir\":\"C:\\\\Users\\\\username\\\\.prog\\\\120.67.0\\\\x86_64\\\\x64\",\"File\":\"$Storage$\\xD0\\x9F\\xD1\\x80\\xD0\\xB8\\xD0\\xB7\\xD0\\xBD\\xD0\\xB0\\xD0\\xBA.20.tbl.xml\"}"
  }
}
The resulting event:
{
  "obj": {
    "field": "{\"Dir\":\"C:\\\\Users\\\\username\\\\.prog\\\\120.67.0\\\\x86_64\\\\x64\",\"File\":\"$Storage$Признак.20.tbl.xml\"}"
  }
}
More details...
debug
It logs events to stderr. Useful for debugging.
It can sample by logging the first N events each tick.
If more events are seen during the same interval,
every thereafter-th event is logged and the rest are dropped.
For example,
- type: debug
  interval: 1s
  first: 10
  thereafter: 5
This will log the first 10 events in a one-second interval as-is.
After that, it will pass through every 5th event in that interval.
More details...
decode
It decodes a string from the event field and merges the result with the event root.
If one of the decoded keys already exists in the event root, it will be overridden.
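A minimal config sketch using decoder: json; the field parameter name is an assumption, so check the linked details for the actual options:

```yaml
actions:
- type: decode
  field: log     # assumed: field holding the encoded string
  decoder: json
```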
More details...
discard
It drops an event. Use it in combination with the match_fields/match_mode parameters to filter out events.
An example for discarding informational and debug logs:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: discard
      match_fields:
        level: /info|debug/
    ...
More details...
flatten
It extracts the object keys and adds them to the root with some prefix. If the provided field isn't an object, the event will be skipped.
Example:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: flatten
      field: animal
      prefix: pet_
    ...
It transforms {"animal":{"type":"cat","paws":4}} into {"pet_type":"cat","pet_paws":"4"}.
More details...
hash
It calculates a hash for one of the specified event fields and adds a new field with the result to the event root.
Fields can be of any type except objects and arrays.
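A hedged config sketch; the fields and result_field parameter names are assumptions, so check the linked details for the actual options:

```yaml
actions:
- type: hash
  fields:              # assumed: candidate fields to hash
  - trace_id
  result_field: hash   # assumed: where the result is stored in the event root
```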
More details...
join
It makes one big event from a sequence of events.
It is useful for reassembling "exceptions" or "panics" that were written line by line.
Also known as "multiline".
⚠ Parsing the whole event flow can be very CPU intensive because the plugin uses regular expressions.
Consider the match_fields parameter to process only particular events. Check out the example for details.
Example of joining Go panics:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: join
      field: log
      start: '/^(panic:)|(http: panic serving)/'
      continue: '/(^\s*$)|(goroutine [0-9]+ \[)|(\([0-9]+x[0-9,a-f]+)|(\.go:[0-9]+ \+[0-9]x)|(\/.*\.go:[0-9]+)|(\(...\))|(main\.main\(\))|(created by .*\/.*\.)|(^\[signal)|(panic.+[0-9]x[0-9,a-f]+)|(panic:)/'
      match_fields:
        stream: stderr # apply only to events written to stderr to save CPU time
    ...
More details...
join_template
An alias for the join plugin with predefined fast start and continue checks (no regexes used).
Use do_if or match_fields to prevent extra checks and reduce CPU usage.
Example of joining Go panics:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: join_template
      template: go_panic
      field: log
      do_if:
        field: stream
        op: equal
        values:
        - stderr # apply only to events written to stderr to save CPU time
    ...
More details...
json_decode
It decodes a JSON string from the event field and merges the result with the event root.
If the decoded JSON isn't an object, the event will be skipped.
⚠ DEPRECATED. Use decode plugin with decoder: json instead.
More details...
json_encode
It replaces a field with its JSON string representation.
Example:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: json_encode
      field: server
    ...
It transforms {"server":{"os":"linux","arch":"amd64"}} into {"server":"{\"os\":\"linux\",\"arch\":\"amd64\"}"}.
More details...
json_extract
It extracts fields from a JSON-encoded event field and adds the extracted fields to the event root.
The plugin extracts fields on the fly and can work with incomplete JSON (e.g. if it was cut by a max size limit).
If the field value is an incomplete JSON string, fields can still be extracted from the remaining part, which must be the first half of the JSON,
e.g. fields can be extracted from {"service":"test","message":"long message", but not from "service":"test","message":"long message"}
because starting as valid JSON matters.
If an extracted field already exists in the event root, it will be overridden.
More details...
keep_fields
It keeps the listed event fields and removes the others.
Nested fields are supported: list subfield names separated with dots.
Example:
fields: ["a.b.f1", "c"]
# event before processing
{
  "a": {
    "b": {
      "f1": 1,
      "f2": 2
    }
  },
  "c": 0,
  "d": 0
}
# event after processing
{
  "a": {
    "b": {
      "f1": 1
    }
  },
  "c": 0
}
NOTE: if the fields param contains nested fields along with their parents, the nested entries are redundant and are ignored.
For example, fields: ["a.b", "a"] gives the same result as fields: ["a"].
See cfg.ParseNestedFields.
More details...
mask
The mask plugin matches events against regular expressions and replaces the matched symbols with asterisks.
You can set regular expressions and submatch groups.
Note: masks are applied only to string and number values.
Example 1:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: mask
      masks:
      - re: "\b(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\b"
        groups: [1,2,3]
    ...
The mask plugin can have white and black lists of fields via the process_fields and ignore_fields parameters respectively.
Elements of the process_fields and ignore_fields lists are JSON paths (e.g. message is the field message in the root,
field.subfield is the field subfield inside the object value of the field field).
Note: process_fields and ignore_fields cannot be used simultaneously.
Example 2:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: mask
      ignore_fields:
      - trace_id
      masks:
      - re: "\b(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\b"
        groups: [1,2,3]
    ...
All masks will be applied to all fields in the event except for the trace_id field in the root of the event.
Example 3:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: mask
      process_fields:
      - message
      masks:
      - re: "\b(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\b"
        groups: [1,2,3]
    ...
All masks will be applied only to the message field in the root of the event.
The process_fields and ignore_fields lists can also be used on a per-mask basis. In that case, if a mask has
a non-empty process_fields or ignore_fields list and there is a non-empty process_fields or ignore_fields list
in the plugin parameters, the mask's field lists override the plugin's lists.
Example 4:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: mask
      ignore_fields:
      - trace_id
      masks:
      - re: "\b(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\D?(\d{1,4})\b"
        groups: [1,2,3]
      - re: "(test)"
        groups: [1]
        process_fields:
        - message
    ...
The first mask will be applied to all fields in the event except for the trace_id field in the root of the event.
The second mask will be applied only to the message field in the root of the event.
More details...
modify
It modifies the content of a field or adds a new field. It works only with strings.
You can provide an unlimited number of config parameters. Each parameter is handled as cfg.FieldSelector:cfg.Substitution.
When _skip_empty is set to true, the field won't be modified/added if the field value is empty.
Note: when used to add new nested fields, each child field is added step by step, which can cause performance issues.
Example:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: modify
      my_object.field.subfield: value is ${another_object.value}.
    ...
The resulting event could look like:
{
  "my_object": {
    "field": {
      "subfield": "value is 666."
    }
  },
  "another_object": {
    "value": 666
  }
}
More details...
move
It moves fields to the target field in a certain mode:
- In allow mode, the specified fields will be moved.
- In block mode, the unspecified fields will be moved.
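A hedged config sketch for allow mode; the mode, target and fields parameter names are assumptions, so check the linked details for the actual options:

```yaml
actions:
- type: move
  mode: allow     # assumed values: allow or block, per the description above
  target: meta    # assumed: the field the moved fields are nested under
  fields:
  - user_id
  - session_id
```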
More details...
parse_es
It parses HTTP input in the Elasticsearch /_bulk API format. Sources defining create/index actions are converted into events; update/delete actions are ignored.
Check out the details in Elastic Bulk API.
More details...
parse_re2
It parses a string from the event field using an re2 expression with named subgroups and merges the result with the event root.
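A hedged config sketch; the field and re2 parameter names are assumptions, so check the linked details for the actual options. Named subgroups (?P&lt;name&gt;...) become fields in the event root:

```yaml
actions:
- type: parse_re2
  field: message                        # assumed: field holding the string to parse
  re2: '(?P<method>\w+) (?P<path>\S+)'  # named subgroups become event fields
```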
More details...
remove_fields
It removes the listed event fields and keeps the others.
Nested fields are supported: list subfield names separated with dots.
Example:
fields: ["a.b.c"]
# event before processing
{
  "a": {
    "b": {
      "c": 100,
      "d": "some"
    }
  }
}
# event after processing
{
  "a": {
    "b": {
      "d": "some" # "c" removed
    }
  }
}
If a field name contains dots, use backslashes to escape them.
Example:
fields:
- exception\.type
# event before processing
{
  "message": "Exception occurred",
  "exception.type": "SomeType"
}
# event after processing
{
  "message": "Exception occurred" # "exception.type" removed
}
More details...
rename
It renames the fields of the event. You can provide an unlimited number of config parameters. Each parameter is handled as cfg.FieldSelector:string.
When override is set to false, the field won't be renamed if there is a field name collision.
The order of rename operations isn't guaranteed. Use separate actions for prioritization.
Note: if the field to be renamed starts with an underscore "_", it should be escaped with one preceding underscore. E.g.
if the field is "_HOSTNAME", in the config it should be "__HOSTNAME". Only one preceding underscore is needed.
Field names with only one underscore in the config are treated as having no preceding underscore:
for "_HOSTNAME" in the config, the plugin searches for the "HOSTNAME" field.
Example common:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: rename
      override: false
      my_object.field.subfield: new_sub_field
    ...
Input event:
{
  "my_object": {
    "field": {
      "subfield": "value"
    }
  }
}
Output event:
{
  "my_object": {
    "field": {
      "new_sub_field": "value" # renamed
    }
  }
}
Example journalctl:
pipelines:
  example_pipeline:
    ...
    actions:
    - type: rename
      override: false
      __HOSTNAME: host
      ___REALTIME_TIMESTAMP: ts
    ...
Input event:
{
  "_HOSTNAME": "example-host",
  "__REALTIME_TIMESTAMP": "1739797379239590"
}
Output event:
{
  "host": "example-host", # renamed
  "ts": "1739797379239590" # renamed
}
More details...
set_time
It adds a time field to the event.
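A minimal config sketch; the field and format parameter names are assumptions, so check the linked details for the actual options:

```yaml
actions:
- type: set_time
  field: time        # assumed: name of the added field
  format: rfc3339    # assumed: output time format
```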
More details...
split
It splits an array of objects into separate events.
For example:
{
  "data": [
    { "message": "go" },
    { "message": "rust" },
    { "message": "c++" }
  ]
}
Split produces:
{ "message": "go" },
{ "message": "rust" },
{ "message": "c++" }
The parent event will be discarded.
If the value of the JSON field is not an array of objects, the event is passed on unchanged.
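A config sketch matching the example above; the field parameter name is an assumption:

```yaml
actions:
- type: split
  field: data  # assumed: the field holding the array of objects
```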
More details...
throttle
It discards events if the pipeline throughput gets higher than a configured threshold.
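A hedged config sketch; all parameter names and values below are assumptions used to illustrate the idea of a per-key rate limit, so check the linked details for the actual options:

```yaml
actions:
- type: throttle
  throttle_field: k8s_pod   # assumed: events are limited per value of this field
  default_limit: 5000       # assumed: events allowed per interval
  interval: 1m              # assumed parameter name
```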
More details...
Generated using insane-doc