Fluentd v1.2.2 has been released

Hi users!

We have released v1.2.2. ChangeLog is here. This release is mainly for bug fixes.

filter_parser: Add remove_key_name_field parameter

With reserve_data true, the key_name field is kept in the record.

Original record:

{"k1":"v1","log":"{\"k2\":\"v2\"}"}

Parsed result:

{"k1":"v1","log":"{\"k2\":\"v2\"}","k2":"v2"}

But we often don't need the original key_name field once parsing has succeeded. You can remove the key_name field with remove_key_name_field true. The parsed result is below:

{"k1":"v1","k2":"v2"}

Major bug fixes

  • buffer: Wait for all chunks to be purged before deleting @queued_num items

Thanks for submitting bug reports and patches :)

Enjoy logging!

Fluentd v1.2.1 has been released

Hi users!

We have released v1.2.1. ChangeLog is here. This release is mainly for bug fixes.

Counter API: Add wait to Client

This is a handy API for error checking. With get, you need to check for an error response in the response body. wait raises an exception when the API call has a problem.

require 'fluent/plugin/filter'

module Fluent
  module Plugin
    class CounterFilter < Filter
      Plugin.register_filter('counter', self)

      helpers :counter

      def start
        super

        @client = counter_client_create(scope: 'test')
        @client.establish('counter')
        # if the init call returns an error, an exception is raised
        begin
          @client.init(name: 'num', type: 'numeric', reset_interval: 10).wait
        rescue Fluent::Counter::BaseError
          # process client specific error
        end
      end

      # ...
    end
  end
end

Major bug fixes

  • in_tcp/in_udp: Fix source_hostname_key to set hostname correctly
  • out_file: Temporary fix for broken gzipped files with gzip compression and append true

Thanks for submitting bug reports and patches :)

Enjoy logging!

Fluentd v1.2.0 has been released

Hi users!

We have released v1.2.0. ChangeLog is here. This release includes new features and improvements.

output: Backup for broken chunks

Fluentd receives various events from various data sources, and these events sometimes cause problems in Output plugins.

For example, if an application generates events that are invalid for the data destination, e.g. a schema mismatch, the buffer flush always fails. Another case is when generated events are invalid for the output configuration, e.g. a required field is missing.

Fluentd has a retry feature for temporary failures, but these errors never succeed no matter how many times the flush is retried. So Fluentd should not retry such unexpected "broken chunks".

Since v1.2.0, fluentd routes broken chunks to a backup directory. By default, the backup root directory is /tmp/fluent. If you set root_dir in <system>, that value is used instead. The file path is built from several parameters to make it unique; the path template is ${root_dir}/backup/worker${worker_id}/${plugin_id}/${chunk_id}.log.

If you have the following configuration and an unrecoverable error happens inside the @type sql plugin,

<system>
  root_dir /var/log/fluentd
</system>
# ...
<match app.**>
  @type sql
  @id out_sql
  <buffer>
    # ...
  </buffer>
</match>

the chunk is routed to /var/log/fluentd/backup/worker0/out_sql/56644156d63a108eda2f487c58140736.log.

Currently, fluentd routes chunks to the backup directory when an Output plugin raises one of the following errors during buffer flush.

  • Fluent::UnrecoverableError: An output plugin can raise this error to avoid retries in plugin-specific cases.
  • TypeError: This error sometimes happens when an event has an unexpected type in a target field.
  • ArgumentError: This error sometimes happens when a plugin uses a library incorrectly.
  • NoMethodError: This error sometimes happens when events and configuration are mismatched.

Fluentd continues to retry the buffer flush when other errors happen.
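
For plugin developers, this means an output can raise Fluent::UnrecoverableError itself to send a failing chunk straight to the backup directory instead of the retry queue. Here is a minimal sketch; the 'example' plugin name and its empty-payload check are purely illustrative assumptions:

require 'fluent/plugin/output'

module Fluent
  module Plugin
    class ExampleOutput < Output
      Plugin.register_output('example', self)

      def write(chunk)
        data = chunk.read
        # hypothetical validation: treat an empty payload as permanently broken,
        # so fluentd backs up the chunk instead of retrying it forever
        raise Fluent::UnrecoverableError, 'chunk payload is empty' if data.empty?
        # ... send data to the destination here ...
      end
    end
  end
end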

New API: Counter API

This is for plugin developers. The Counter API consists of a server and clients: the server stores counter values, and clients call APIs to update values on the server.

Here is an example configuration:

<system>
  <counter_server>
    scope test
    bind 127.0.0.1
    port 25000
    backup_path /tmp/counter_backup.json
  </counter_server>
  <counter_client>
    host 127.0.0.1
    port 25000    
  </counter_client>
</system>

And here is a filter implementation example:

require 'fluent/plugin/filter'

module Fluent
  module Plugin
    class CounterFilter < Filter
      Plugin.register_filter('counter', self)

      helpers :counter

      def start
        super

        @client = counter_client_create(scope: 'test') # scope value is the same as in `<counter_server>`
        @client.establish('counter').get
        @client.init(name: 'num', type: 'numeric', reset_interval: 10).get
      end

      def filter(tag, time, record)
        @client.inc(name: 'num', value: 5)
        p @client.get('num').data.first["current"] # Show current value
        record
      end
    end
  end
end

This API is useful for storing metrics in a multi-process environment. We will add API documentation.

filter_grep: Support for <and> and <or> sections

The grep filter now supports "and"/"or" conditions for <regexp> / <exclude> directives. Before v1.2.0, multiple <regexp> directives were always combined with "and", and multiple <exclude> directives with "or".

<filter pattern>
  # These <regexp>s are "and"
  <regexp>
    key level
    pattern ^ERROR|WARN$
  </regexp>
  <regexp>
    key method
    pattern ^GET|POST$
  </regexp>
  # These <exclude>s are "or"
  <exclude>
    key level
    pattern ^WARN$
  </exclude>
  <exclude>
    key method
    pattern ^GET$
  </exclude>
</filter>

v1.2.0 adds <and> and <or> sections to support more patterns.

Here is a configuration example:

<filter pattern>
  <or>
    <regexp>
      key level
      pattern ^ERROR|WARN$
    </regexp>
    <regexp>
      key method
      pattern ^GET|POST$
    </regexp>
  </or>
  <and>
    <exclude>
      key level
      pattern ^WARN$
    </exclude>
    <exclude>
      key method
      pattern ^GET$
    </exclude>
  </and>
</filter>

If you pass this data stream:

{"time" : "2013/01/13T07:02:11.124202", "level" : "INFO", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:13.232645", "level" : "WARN", "method" : "POST", "path" : "/auth"}
{"time" : "2013/01/13T07:02:21.542145", "level" : "WARN", "method" : "GET", "path" : "/favicon.ico"}
{"time" : "2013/01/13T07:02:43.632145", "level" : "WARN", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:44.959307", "level" : "ERROR", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:45.444992", "level" : "ERROR", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:51.247941", "level" : "WARN", "method" : "GET", "path" : "/info"}
{"time" : "2013/01/13T07:02:53.108366", "level" : "WARN", "method" : "POST", "path" : "/ban"}

the filtered result is below:

{"time" : "2013/01/13T07:02:11.124202", "level" : "INFO", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:13.232645", "level" : "WARN", "method" : "POST", "path" : "/auth"}
{"time" : "2013/01/13T07:02:43.632145", "level" : "WARN", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:44.959307", "level" : "ERROR", "method" : "POST", "path" : "/login"}
{"time" : "2013/01/13T07:02:45.444992", "level" : "ERROR", "method" : "GET", "path" : "/ping"}
{"time" : "2013/01/13T07:02:53.108366", "level" : "WARN", "method" : "POST", "path" : "/ban"}

Major bug fixes

  • server helper: Close invalid sockets when an SSL error happens during reading
  • log: Fix an unexpected implementation bug when log rotation settings are applied

Thanks for submitting bug reports and patches :)

Enjoy logging!

Fluentd v1.1.3 has been released

Hi users!

We have released v1.1.3. ChangeLog is here. This release includes several enhancements and bug fixes.

output: Support negative array index for tag placeholder

We can use an array index with the tag placeholder. This is useful for accessing tag parts.

<match app.**>
  @type foo
  param value-${tag[1]} # if tag is 'app.foo.bar', ${tag[1]} is 'foo'
</match>

Since v1.1.3, you can also use a negative array index for the tag placeholder. The behaviour is the same as Ruby's negative array indexing.

<match app.**>
  @type foo
  param value-${tag[-1]} # if tag is 'app.foo.bar', ${tag[-1]} is 'bar'
</match>

buffer: Add queued_chunks_limit_size to control the number of queued chunks

The new queued_chunks_limit_size parameter mitigates the problem of too many queued chunks under frequent enqueuing.

Sometimes users set a small flush_interval, e.g. 1s, for log forwarding. This is no problem in a healthy environment, but if the destination is slow or unstable, the output's flush fails and retries start. In such a situation, lots of small queued chunks are generated in the buffer, and they consume lots of fd resources when you use the file buffer. queued_chunks_limit_size is useful for avoiding this problem. If you set queued_chunks_limit_size 5, staged chunks are not enqueued until the number of waiting enqueued chunks drops below 5.

Note that this check currently applies only to interval-based enqueuing. That means if a staged chunk reaches chunk_limit_size, it is enqueued even when the number of waiting enqueued chunks is greater than queued_chunks_limit_size.
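
For example, the following buffer configuration caps the number of waiting queued chunks at 5; the match pattern, forward output, and destination host are assumptions for illustration:

<match app.**>
  @type forward
  <buffer>
    flush_interval 1s
    queued_chunks_limit_size 5
  </buffer>
  <server>
    host destination.example.com
  </server>
</match>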

Major bug fixes

  • output: Delete the empty queued_num field after purging chunks. This fixes a memory leak when chunk keys include time
  • out_forward: The node should be disabled when the TLS socket for ack returns an error

Thanks for submitting bug reports and patches :)

Enjoy logging!

Fluentd v1.1.1 has been released

Hi users!

We have released v1.1.1. ChangeLog is here. This release includes several enhancements and bug fixes.

in_forward/server helper: Support mutual TLS authentication

We added the ca_path and client_cert_auth parameters to the server plugin helper. You can now send data between fluent-bit and fluentd with mutual TLS authentication. In in_forward, put these parameters inside <transport tls>.

<source>
  @type forward
  # ... other parameters
  <transport tls>
    # ... other parameters
    ca_path /path/to/ca_file
    client_cert_auth true
  </transport>
</source>

buf_file: Skip and delete broken chunk files during resume

fluentd can't restart if an unexpected broken file exists in the buffer directory, because such files cause errors in the resume routine. Since v1.1.1, if fluentd finds broken chunks during resume, those files are skipped and deleted from the buffer directory.

We plan to add a backup feature for broken files in v1.2.0, which will move broken files into a backup directory instead of deleting them.

Major bug fixes

  • json parser: Fix error handling for oj 3.x
  • in_tail: Fix watcher race condition during shutdown
  • in_http: Emit event time instead of raw time value in batch mode

Thanks for submitting bug reports and patches :)

Enjoy logging!
