Hi users!
We have released v1.2.5. ChangeLog is here.
This release fixes an in_tail resource leak.
If you use in_tail with rotated log files, we recommend updating Fluentd to this version.
Enjoy logging!
Hi users!
We have released v1.2.4. ChangeLog is here. This release is for bug fixes.
output: Consider the timezone when using a larger timekey. This fixes unexpected buffer flushes with a 1d timekey.
server helper: Fix a connection leak caused by a close timing issue
Thanks for submitting bug reports and patches :)
Enjoy logging!
Hi users!
We have released v1.2.2. ChangeLog is here. This release is mainly for bug fixes.
filter_parser: Add remove_key_name_field parameter
With reserve_data true, the key_name field is kept in the record.
Original record:
{"k1":"v1","log":"{\"k2\":\"v2\"}"}
Parsed result:
{"k1":"v1","log":"{\"k2\":\"v2\"}","k2":"v2"}
But we often don't need the original key_name field once parsing succeeds.
You can remove the key_name field with remove_key_name_field true.
The parsed result is below:
{"k1":"v1","k2":"v2"}
buffer: Fix @queued_num items handling
Thanks for submitting bug reports and patches :)
Enjoy logging!
Hi users!
We have released v1.2.1. ChangeLog is here. This release is mainly for bug fixes.
Add wait method to Client
This is a handy API for error checking. With get, you need to check for an error response in the response body; wait raises an exception when the API call has a problem.
require 'fluent/plugin/filter'

module Fluent
  module Plugin
    class CounterFilter < Filter
      Plugin.register_filter('counter', self)

      helpers :counter

      def start
        super
        @client = counter_client_create(scope: 'test')
        @client.establish('counter')
        # if the init call returns an error, wait raises an exception
        begin
          @client.init(name: 'num', type: 'numeric', reset_interval: 10).wait
        rescue Fluent::Counter::BaseError
          # process client-specific errors
        end
      end
      # ...
    end
  end
end
out_file: Fix a problem with the combination of compress gzip and append true
Thanks for submitting bug reports and patches :)
Enjoy logging!
Hi users!
We have released v1.2.0. ChangeLog is here. This release includes new features and improvements.
Fluentd receives various events from various data sources, and this sometimes causes problems in output plugins.
For example, if an application generates events that are invalid for the data destination, e.g. a schema mismatch, buffer flushes always fail. Another case is events that are invalid for the output configuration, e.g. a required field is missing.
Fluentd has a retry feature for temporary failures, but these errors never succeed on retry. So Fluentd should not retry such unexpected "broken chunks".
Since v1.2.0, Fluentd routes broken chunks to a backup directory.
By default, the backup root directory is /tmp/fluent. If you set root_dir in <system>, root_dir is used instead.
The file path consists of several parameters to make it unique, and ${root_dir}/backup/worker${worker_id}/${plugin_id}/{chunk_id}.log is the path template.
If you have the following configuration and an unrecoverable error happens inside the @type sql plugin,
<system>
root_dir /var/log/fluentd
</system>
# ...
<match app.**>
@type sql
@id out_sql
<buffer>
# ...
</buffer>
</match>
the chunk is routed to /var/log/fluentd/backup/worker0/out_sql/56644156d63a108eda2f487c58140736.log.
Currently, Fluentd routes chunks to the backup directory when an output plugin raises one of the following errors during buffer flush:
Fluent::UnrecoverableError: an output plugin can raise this error to avoid retries in plugin-specific cases.
TypeError: this error sometimes happens when an event has an unexpected type in the target field.
ArgumentError: this error sometimes happens when library usage is wrong in a plugin.
NoMethodError: this error sometimes happens when events and the configuration are mismatched.
Fluentd continues to retry the buffer flush when other errors happen. A sketch of raising Fluent::UnrecoverableError follows.
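For illustration, here is a minimal output plugin sketch that raises Fluent::UnrecoverableError for events it can never deliver. This is an assumption-laden example: the plugin name 'strict' and the required 'id' field are hypothetical, not part of this release.
require 'fluent/plugin/output'

module Fluent
  module Plugin
    class StrictOutput < Output
      Fluent::Plugin.register_output('strict', self)

      def write(chunk)
        chunk.each do |time, record|
          # A record without the required field can never succeed on retry,
          # so ask Fluentd to back up this chunk instead of retrying it.
          raise Fluent::UnrecoverableError, "record has no 'id' field" unless record.key?('id')
          # ... deliver the record to the destination here ...
        end
      end
    end
  end
end
With this plugin, a chunk containing a record without 'id' is moved to the backup directory instead of being retried forever.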
This is for plugin developers. The Counter API consists of a server and clients: the server stores counter values, and clients call APIs to store values in the server.
Here is an example configuration:
<system>
<counter_server>
scope test
bind 127.0.0.1
port 25000
backup_path /tmp/counter_backup.json
</counter_server>
<counter_client>
host 127.0.0.1
port 25000
</counter_client>
</system>
And here is a filter implementation example:
require 'fluent/plugin/filter'

module Fluent
  module Plugin
    class CounterFilter < Filter
      Plugin.register_filter('counter', self)

      helpers :counter

      def start
        super
        @client = counter_client_create(scope: 'test') # scope value is the same as in <counter_server>
        @client.establish('counter').get
        @client.init(name: 'num', type: 'numeric', reset_interval: 10).get
      end

      def filter(tag, time, record)
        @client.inc(name: 'num', value: 5)
        p @client.get('num').data.first["current"] # Show the current value
        record
      end
    end
  end
end
This API is useful for storing metrics in a multi-process environment. We will add API documentation.
grep filter: Add <and> and <or> sections
The grep filter now supports and/or conditions for <regexp> / <exclude>. Before v1.2.0, the grep filter supported only and-combined <regexp> patterns and or-combined <exclude> patterns.
<filter pattern>
# These <regexp>s are "and"
<regexp>
key level
pattern ^ERROR|WARN$
</regexp>
<regexp>
key method
pattern ^GET|POST$
</regexp>
# These <exclude>s are "or"
<exclude>
key level
pattern ^WARN$
</exclude>
<exclude>
key method
pattern ^GET$
</exclude>
</filter>
v1.2.0 adds <and> and <or> sections to support more patterns. Here is a configuration example:
<filter pattern>
<or>
<regexp>
key level
pattern ^ERROR|WARN$
</regexp>
<regexp>
key method
pattern ^GET|POST$
</regexp>
</or>
<and>
<exclude>
key level
pattern ^WARN$
</exclude>
<exclude>
key method
pattern ^GET$
</exclude>
</and>
</filter>
If you pass this data stream:
{"time" : "2013/01/13T07:02:11.124202", "level" : "INFO", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:13.232645", "level" : "WARN", "method" : "POST", "path" : "/auth"}
{"time" : "2013/01/13T07:02:21.542145", "level" : "WARN", "method" : "GET", "path" : "/favicon.ico"}
{"time" : "2013/01/13T07:02:43.632145", "level" : "WARN", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:44.959307", "level" : "ERROR", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:45.444992", "level" : "ERROR", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:51.247941", "level" : "WARN", "method" : "GET", "path" : "/info"}
{"time" : "2013/01/13T07:02:53.108366", "level" : "WARN", "method" : "POST", "path" : "/ban"}
the filtered result is below:
{"time" : "2013/01/13T07:02:11.124202", "level" : "INFO", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:13.232645", "level" : "WARN", "method" : "POST", "path" : "/auth"}
{"time" : "2013/01/13T07:02:43.632145", "level" : "WARN", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:44.959307", "level" : "ERROR", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:45.444992", "level" : "ERROR", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:53.108366", "level" : "WARN", "method" : "POST", "path" : "/ban"}
Thanks for submitting bug reports and patches :)
Enjoy logging!
Hi users!
We have released v1.1.3. ChangeLog is here. This release includes several enhancements and bug fixes.
You can use an array index with the tag placeholder. This is useful for accessing tag parts.
<match app.**>
@type foo
param value-${tag[1]} # if tag is 'app.foo.bar', ${tag[1]} is 'foo'
</match>
Since v1.1.3, you can also use a negative array index with the tag placeholder. The behavior is the same as Ruby's negative array indexing.
<match app.**>
@type foo
param value-${tag[-1]} # if tag is 'app.foo.bar', ${tag[-1]} is 'bar'
</match>
Add queued_chunks_limit_size to control the number of queued chunks
This new queued_chunks_limit_size parameter mitigates the problem of many queued chunks caused by frequent enqueuing.
Sometimes users set a small flush_interval, e.g. 1s, for log forwarding. This is no problem in a healthy environment, but if the destination is slow or unstable, the output's flush fails and retries start.
In such a situation, lots of small queued chunks are generated in the buffer, and they consume many file descriptors when you use the file buffer.
queued_chunks_limit_size is useful to avoid this problem. If you set queued_chunks_limit_size 5, staged chunks are not enqueued until the number of waiting enqueued chunks is less than 5.
Note that this check currently applies only to interval-based enqueuing: if a staged chunk reaches chunk_limit_size, it is enqueued even when the number of waiting enqueued chunks is greater than queued_chunks_limit_size. A configuration sketch follows.
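Here is a minimal configuration sketch; the destination, buffer path, and values are examples, not recommendations:
<match app.**>
  @type forward
  <server>
    host 192.168.1.3
    port 24224
  </server>
  <buffer>
    @type file
    path /var/log/fluentd/buffer/forward
    flush_interval 1s
    queued_chunks_limit_size 5 # do not enqueue new chunks while 5 chunks are already waiting
  </buffer>
</match>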
Thanks for submitting bug reports and patches :)
Enjoy logging!
Hi users!
We have released v1.1.1. ChangeLog is here. This release includes several enhancements and bug fixes.
We added the ca_path and client_cert_auth parameters to the server plugin helper.
You can now send data between fluent-bit and Fluentd with mutual TLS authentication.
In in_forward, put these parameters into <transport tls>.
<source>
@type forward
# ... other parameters
<transport tls>
# ... other parameters
ca_path /path/to/ca_file
client_cert_auth true
</transport>
</source>
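On the sending side, out_forward can present a client certificate for mutual TLS. The sketch below uses out_forward's TLS parameters as found in current documentation; availability in your version may differ, and all paths and hosts are placeholders:
<match app.**>
  @type forward
  transport tls
  tls_cert_path /path/to/ca_file # CA certificate used to verify the server
  tls_client_cert_path /path/to/client_cert # client certificate for mutual TLS
  tls_client_private_key_path /path/to/client_key
  <server>
    host 192.168.1.3
    port 24224
  </server>
</match>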
Fluentd could not restart if an unexpected broken file existed in the buffer directory, because such files cause errors in the resume routine. Since v1.1.1, if Fluentd finds broken chunks during resume, these files are skipped and deleted from the buffer directory.
We plan to add a backup feature for broken files in v1.2.0, which will move broken files into a backup directory instead of deleting them.
Thanks for submitting bug reports and patches :)
Enjoy logging!
Hi users!
We have released v1.1.0. ChangeLog is here. This release includes several new features.
Fluentd configuration supports embedded Ruby code in "#{}" strings.
Many users use this feature to embed runtime values in plugin parameters.
Here is a configuration example:
@id "out_foo#{ENV['SERVERENGINE_WORKER_ID']}}" # add worker id to plugin id under multi-process environment
tag "log.#{Socket.gethostname}" # Use hostname in tag parts
<record>
metadata "#{ENV['SERVER_ROLE']}-#{ENV['FOO']}"
</record>
We noticed that setting hostname and worker_id is popular, and the current configuration is a bit messy because it depends on Fluentd internals, e.g. SERVERENGINE_WORKER_ID comes from serverengine, which Fluentd uses.
So we added hostname and worker_id shortcuts to cover these popular cases.
Here is the new configuration:
@id "out_foo#{worker_id}" # add worker id to plugin id under multi-process environment
tag "log.#{hostname}" # Use hostname in tag parts
If we find other popular cases, we will add new shortcuts.
We have the record_accessor helper for accessing nested fields.
Since v1.1.0, this helper supports nested field deletion. This feature is useful in record_transformer-like plugins.
The syntax is the same, and you can delete nested fields via an accessor object in your plugin code.
deleter = record_accessor_create("$.key1.key2")
deleter.delete(record) # delete record["key1"]["key2"] field
The record_transformer filter plugin supports this feature with the remove_keys parameter, as sketched below.
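Here is a minimal configuration sketch; the match pattern is hypothetical and the key names follow the accessor example above:
<filter app.**>
  @type record_transformer
  remove_keys $.key1.key2 # delete record["key1"]["key2"]
</filter>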
This is a port from fluent-plugin-secure-forward. The use case is the same as "Using private CA file and key". The command name has changed to fluent-ca-generate.
You can change several values, e.g. CN, country, etc., via command options. Check --help for all options.
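For example, generating a CA certificate and private key under a directory looks like this; the directory and passphrase are placeholder values:
$ fluent-ca-generate /tmp/ca yourpassphrase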
We changed buffer management in the in_tcp plugin, and it broke TLS socket handling.
We have changed the TLS socket structure and fixed the regression.
Enjoy logging!
Hi users!
We have released v1.0.2. ChangeLog is here. This announcement includes v1.0.1 changes.
This parameter corresponds to the socket's SO_RCVBUF.
If you want to improve the performance of in_udp, set a larger size with this parameter.
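Assuming this refers to in_udp's receive_buffer_size parameter (the parameter name is our reading of the context; tag, port, and size are example values), a configuration sketch:
<source>
  @type udp
  tag app.udp
  port 20001
  receive_buffer_size 16m # maps to SO_RCVBUF; a larger value can improve throughput
  <parse>
    @type none
  </parse>
</source>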
In the rename(file -> file.bak) -> truncate -> rename(file.bak -> file) case, Fluentd read logs from the bottom of the file.
This caused log loss when the application appended logs after truncate. The latest version reads logs from the head.
After an active node came back, the standby node was still listed among the active nodes. v1.0.2 fixes this regression: the standby node is now properly excluded from the active nodes.
We have also released v0.12.42, which includes the same bug fixes.
Enjoy logging!
We announced Fluentd v1.0 at CloudNativeCon + KubeCon NA 2017.
See CNCF blog about detailed information: Fluentd v1.0 - Cloud Native Computing Foundation
The important point is that v1.0 is built on top of the v0.14 stable version. No changes are needed to upgrade from v0.14 to v1.0.
If you are interested in the v1.0 features, see the following slides:
We will continue to update Fluentd v0.12, but the main changes will be backports and security fixes. We are focusing on v1.0 development.
We plan to change the version behind the stable tag from v0.12 to v1.0 on Jan 1, 2018.
If you want to keep using the v0.12 series in your environment, specify the v0.12 tag instead of the stable / latest tags.
In addition, we will no longer update the v0.14 tags. Use the v1.0 tags instead, as shown below.
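Assuming these are the Docker image tags for fluent/fluentd (the post does not say so explicitly), pinning to the v0.12 series would look like:
$ docker pull fluent/fluentd:v0.12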
Finally, thanks to everyone! We start a new journey with you :)