Update the GPG key for Treasure Agent

Hi folks,

This article is for Treasure Agent users.

We used a SHA1-based GPG key for td-agent package signing, but SHA1 has been deprecated. For example, apt will remove SHA1 support: Teams/Apt/Sha1Removal - Debian Wiki

So we have updated Treasure Agent's GPG key for deb/rpm to drop SHA1-based signing. This means you need to replace the previously imported GPG key before updating td-agent.

If this is a new deployment, or if you have disabled the GPG check, no action is needed.

Here are the update steps for deb/rpm.

deb

Remove old GPG key

% apt-key del A12E206F

Import new GPG key

% curl -O https://packages.treasuredata.com/GPG-KEY-td-agent
% apt-key add GPG-KEY-td-agent
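
Alternatively, you can pipe the key directly into apt-key without saving it to disk; this is the standard curl-to-apt-key pattern, equivalent to the two commands above:

% curl -fsSL https://packages.treasuredata.com/GPG-KEY-td-agent | apt-key add -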

You can check whether the import succeeded:

% apt-key list

Error content

Here is an example of the error with the old GPG key:

W: GPG error: http://packages.treasuredata.com/2/ubuntu/xenial xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 901F9177AB97ACBE

rpm

Remove old GPG key
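
To check whether the old key is installed, you can first list the GPG keys known to rpm (the old key appears as gpg-pubkey-a12e206f-*):

% rpm -qa "gpg-pubkey*"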

If this key is present, remove it with the following command:

% rpm -e --allmatches "gpg-pubkey-a12e206f-*"

Import new GPG key

Import it with rpm --import:

% rpm --import https://packages.treasuredata.com/GPG-KEY-td-agent

You can check whether the import succeeded:

% rpm -qi "gpg-pubkey-ab97acbe-*"

Error content

Here is an example of the error with the old GPG key:

The GPG keys listed for the "TreasureData" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.


 Failing package is: td-agent-2.3.4-0.el7.x86_64
 GPG Keys are configured as: https://packages.treasuredata.com/GPG-KEY-td-agent


Fluentd v0.12.31 has been released

Hi users!

We have released Fluentd version 0.12.31. Here are the changes:

New features / Enhancement

  • output: Add slow_flush_log_threshold parameter: #1366
  • formatter_csv: Change fields parameter to required. Now accepts both a,b and ["a", "b"]: #1361
  • in_syslog: Add priority_key and facility_key parameters: #1351

Add slow_flush_log_threshold parameter

We introduced the slow_flush_log_threshold parameter to help investigate flush performance issues. Fluentd users sometimes hit BufferQueueLimitError without any other errors. In such cases, it is hard to find the cause: is traffic too high, or has the output destination become slow?

With this parameter, users can tell whether the output destination has a problem. If you see the following message in the fluentd log, your output destination or network has a problem that causes slow chunk flushes.

2016-12-19 12:00:00 +0000 [warn]: buffer flush took longer time than slow_flush_log_threshold: elapsed_time=15.0031226690043695 slow_flush_log_threshold=10.0 plugin_id="es_output"

In this example, slow_flush_log_threshold is 10.0 but the chunk flush took 15 seconds.

<match app.**>
  @type elasticsearch
  @id es_output
  slow_flush_log_threshold 10.0
  # ...
</match>

The default value is 20.0. If your buffer chunks are small and network latency is low, set a smaller value for better monitoring.


Lastly, the v0.12.31 Docker image is also available on Docker Hub.


Happy logging!


Fluentd v0.14.10 has been released

Hi users!

We have just shipped Fluentd v0.14.10 including some API improvements and major bug fixes.

Here are major changes (full ChangeLog is available here):

We say it again: Fluentd v0.14 is still a development version. If you want to try v0.14, check your configuration and plugins carefully.

Plugin implementation updates

We introduced socket/server plugin helpers and updated the implementations of several plugins:

  • in_forward
  • in_tcp
  • in_udp
  • out_forward

These updates are groundwork to support multi-process workers in a future release, and may have a performance impact. Please let us know if you find a performance regression.

in_tail: Improve input performance

The tail input plugin was optimized to read lines from files more efficiently. The improved line splitting performs 2x faster than previous versions in a micro benchmark (this measures line splitting, not total Fluentd performance).

This fix may have a positive impact if your environment uses in_tail for heavy traffic and has a serious CPU-related performance problem.

in_syslog: Add options to inject priority and facility into records

The two newly added options, priority_key and facility_key, make it possible to inject the syslog priority and facility into emitted records.
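
For reference, here is a minimal sketch of an in_syslog source using both options (the port, tag and record key names are illustrative choices, not requirements):

<source>
  @type syslog
  port 5140
  tag system
  priority_key severity
  facility_key facility
</source>

With this configuration, each emitted record gains severity and facility fields alongside the parsed syslog message.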

Major bug fixes

  • compatibility layer: Fix compat output plugins to handle the "utc" configuration parameter correctly #1319
  • parser plugins: Fix to set options of regular expressions correctly #1326
  • logger: Fix a bug that raised an error for a missing logger in v0.12 plugins #1332 #1344
  • out_forward: Fix a bug where v0.12-style buffer configurations were not referenced #1337
  • out_forward: Fix a bug that raised an error when the expire_dns_cache parameter is specified #1346
  • out_file: Fix a bug that raised a buffer configuration error when out_file is configured as a secondary plugin #1338

Enjoy logging!

Read More

Fluentd joins the Cloud Native Computing Foundation

Fluentd is taking a big step forward in terms of adoption. It was not a surprise that it was adopted early by several companies, and from a cloud perspective it has become the de facto standard for solving logging in containerized environments. We have seen how Docker users rely on Fluentd to scale logging, while in the orchestration area Kubernetes users started their own integrations too; nowadays Kubernetes uses Fluentd to ship logs to Elasticsearch and Google Cloud Platform. Fluentd is experiencing organic growth.

From a project perspective, adoption is the key to success. But to accomplish this adoption a project needs to be a good citizen with other components; good integration is always desired. Fluentd's adoption is largely thanks to its flexibility to adapt and integrate with other platforms, and of course to solving a real-world problem: logging.

As you might have noticed, over the last two years the Fluentd team has been very active sharing logging knowledge at several conferences around the world; I'd say our biggest participation has been in LinuxCon Asia, Europe and North America (in all its editions!). This kind of interaction was really positive for understanding how the project could evolve even further. At the beginning of 2016, we were thinking about what the next natural step for Fluentd would be, and we saw some light when the Linux Foundation announced the new Cloud Native Computing Foundation (aka CNCF).

Since CNCF is a nonprofit organization committed to advancing the development of cloud native technology, it sounded like a really good fit for Fluentd; at that moment Google had already donated Kubernetes to CNCF. The biggest benefits of CNCF compared to other foundations are:

  • Flexibility: development and the roadmap continue to be handled by the project. CNCF does not want to control that, so we can move forward very quickly.
  • Ecosystem: Fluentd can formally be part of a cloud native stack and work together with the Kubernetes team (and others) in a better way.
  • Investment: CNCF donates resources for documentation and gives access to the CNCF Cluster (1k servers).
  • Awareness: Fluentd becomes recognized as the default cloud native logging technology.

After a long process of review and technical discussions, the CNCF Technical Oversight Committee voted on Fluentd with a positive result: Fluentd joins the CNCF! This was announced in the opening keynote session at CloudNativeCon (jump to minute 24:40):

Fluentd at CNCF

Note that Fluentd is a whole ecosystem: if you look around inside our GitHub organization, you will see around 35 repositories including the Fluentd service, plugins, language SDKs and complementary projects such as Fluent Bit. All of them are part of CNCF now!

On behalf of Treasure Data, I want to thank every developer of Fluentd, plugins and SDKs who was so supportive of this transition for the project. Our code and community are part of something bigger now :)

What's Next?

The Fluentd team continues working hard to make Fluentd even better; there is a long roadmap for v0.14, and we are looking forward to a v1.0 in Q1 of 2017. We want everybody to stay involved: this is a really exciting time in the cloud native era, and the Fluentd community is playing a key role in it.


Fluentd v0.14.9 has been released

Hi users!

We have just shipped Fluentd v0.14.9 including built-in plugin migration and bug fixes.

Here are major changes (full ChangeLog is available here):

We say it again: Fluentd v0.14 is still a development version. If you want to try v0.14, check your configuration and plugins carefully.

Migrate several plugins to v0.14 API

We continue to migrate built-in plugins to the v0.14 API. Here are the plugins migrated in this release:

  • in_http
  • in_forward
  • out_forward
  • out_file
  • out_exec
  • out_exec_filter

Below we describe the important changes to these plugins.

in_http

We removed the undocumented detach process feature from in_http because the DetachMultiProcessMixin module is deprecated in v0.14. If you set detach-process-related parameters in your configuration, they are now ignored.

out_forward

Since this version, the time_as_integer parameter defaults to false. This means v0.14.9's out_forward can't forward data to v0.12's in_forward by default; you need to set time_as_integer true explicitly. We already mentioned this point in the v0.14.0 release announcement, so we hope this change doesn't break your v0.14-to-v0.12 forwarding.
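
For example, a minimal sketch of forwarding from v0.14 to a v0.12 in_forward looks like this (the host and port are placeholders):

<match app.**>
  @type forward
  time_as_integer true # needed only when the receiver is v0.12
  <server>
    host 192.168.1.1
    port 24224
  </server>
</match>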

out_file

The v0.14 Plugin API provides a placeholder feature. You can emulate fluent-plugin-forest in a more flexible way. Here is a configuration example:

<match mydata>
  @type file
  path /path/to/${key}
  <buffer time,key>
    flush_at_shutdown true
  </buffer>
  <format>
    @type json
  </format>
</match>

${key} refers to the key field of the event record. If you pass a {"key":"foo"} record, the actual output file becomes /path/to/foo. The important point is that if you want to refer to time, tag or record keys in path, you need to list those keys in <buffer CHUNK_KEYS> (see also the v0.14 Plugin API slide). Here are several examples:

# Popular time, tag and key case
<match mydata>
  @type file
  path /path/to/%Y/%m/%d/${tag}/${key} # path is /path/to/2016/11/15/mydata/foo_0.log
  <buffer time,tag,key>
    flush_at_shutdown true
  </buffer>
  # ...
</match>

# Of course, you can use more keys
<match mydata>
  @type file
  path /path/to/${key1}/${key2}/${key3} # If record is {"key1":"foo","key2":"bar","key3":"baz"}, path is /path/to/foo/bar/baz.20161115_0.log
  <buffer time,key1,key2,key3>
    flush_at_shutdown true
  </buffer>
  # ...
</match>

The out_file plugin requires time in CHUNK_KEYS because the time placeholder is used in path implicitly.

We have a plan to migrate other 3rd party plugins to v0.14, e.g. s3, kafka and more. We will be able to say "Goodbye, fluent-plugin-forest!" in the near future.

Port Parser filter plugin

The fluent-plugin-parser plugin is widely used, so we decided to port it into the core.

Note that we changed the invalid event handling: fluent-plugin-parser logs a warning when an event is invalid, while the built-in parser filter emits invalid events to the built-in @ERROR label. As a result, you can process invalid events using other plugins.

<source>
  @type forward
</source>

<filter app.**>
  @type parser
  key_name log
  <parse>
    @type json
  </parse>
</filter>

# If log field is json, record comes here
<match app.**>
  @type stdout
</match>

<label @ERROR>
  # If log field is not json, record comes here. Store such events into local file.
  <match app.**>
    @type file
    # ...
  </match>
</label>

record_transformer filter: Change default behaviours

record_transformer changes some default behaviours and removes old ones.

  • auto_typecast is now true by default. It means the result of ${10 - 2} or ${record["int_field"]} is an integer, not a string.
  • The ${tags} placeholder has been removed. Use tag_parts instead.

We also have a plan to remove the ${key} placeholder in the next version. Use ${record["key"]} instead.
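
Here is a minimal sketch illustrating the new defaults (the field names are illustrative, and enable_ruby is required for arbitrary expressions like ${10 - 2}):

<filter app.**>
  @type record_transformer
  enable_ruby
  <record>
    # auto_typecast is true by default, so this stays an integer
    answer ${10 - 2}
    # use ${record["key"]} instead of the deprecated ${key}
    message_copy ${record["message"]}
    # use tag_parts instead of the removed ${tags}
    first_tag_part ${tag_parts[0]}
  </record>
</filter>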

Major bug fixes

  • fluent-cat: Fix fluent-cat command to send sub-second precision time #1277
  • out_forward: Fix not to raise an error when out_forward is initialized as a secondary plugin #1313

Enjoy logging!

