***************************************
omhdfs: Hadoop Filesystem Output Module
***************************************

===========================  ===========================================================================
**Module Name:**             **omhdfs**
**Author:**                  `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
===========================  ===========================================================================


Purpose
=======

This module supports writing messages to files on Hadoop's HDFS file
system.


Configuration Parameters
========================

.. note::

   Parameter names are case-insensitive.


|FmtObsoleteName| Directives
----------------------------

.. csv-table::
   :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
   :widths: auto
   :class: parameter-table

   "``$OMHDFSFileName``", "word", "none", "no"

The name of the file to which the output data shall be written.

.. csv-table::
   :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
   :widths: auto
   :class: parameter-table

   "``$OMHDFSHost``", "word", "default", "no"

Name or IP address of the HDFS host to connect to.

.. csv-table::
   :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
   :widths: auto
   :class: parameter-table

   "``$OMHDFSPort``", "integer", "0", "no"

Port on which to connect to the HDFS host.

.. csv-table::
   :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
   :widths: auto
   :class: parameter-table

   "``$OMHDFSDefaultTemplate``", "word", "RSYSLOG_FileFormat", "no"

Default template to be used when none is specified. This saves the work of
specifying the same template over and over again. Of course, the default
template can be overridden via the usual method.
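
Taken together, the directives above can be combined into a single
action. The following is a hedged sketch only: the host name, port and
file name are illustrative assumptions, not defaults, and must be
adapted to the local HDFS setup.

.. code-block:: none

   $ModLoad omhdfs

   # assumed values for illustration -- adjust to your environment
   $OMHDFSHost hdfs-namenode.example.com
   $OMHDFSPort 8020
   $OMHDFSFileName /rsyslog/logfile
   *.* :omhdfs: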


Caveats/Known Bugs
==================

Building omhdfs is a challenge because we have not yet found out how to
integrate Java properly into the autotools build process. The issue is
that HDFS is written in Java and libhdfs uses JNI to talk to it. This
requires that various system-specific environment options and paths be
set correctly. At this point, we leave this to the user. If someone knows
how to do it better, please drop us a line!

-  In order to build, you need to set these environment variables BEFORE
   running ./configure (see the consolidated sketch after this list):

   -  JAVA_INCLUDES - must contain all include paths needed to build JNI
      C programs, including the -I options necessary for gcc. An example
      is::

         export JAVA_INCLUDES="-I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/include \
            -I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/include/linux"

   -  JAVA_LIBS - must contain all library paths needed to build JNI C
      programs, including the -l/-L options necessary for gcc. An example
      is::

         export JAVA_LIBS="-L/usr/java/jdk1.6.0_21/jre/lib/amd64 \
            -L/usr/java/jdk1.6.0_21/jre/lib/amd64/server -ljava -ljvm -lverify"

-  Due to the HDFS architecture, you must make sure that all relevant
   environment variables (the usual Java settings and Hadoop's home
   directory) are properly set.
-  It appears that libhdfs makes Java print exceptions to stdout. There
   is no known work-around for this (and it usually should not cause
   any trouble).
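
Putting these pieces together, a complete build might look roughly like
the following. This is a minimal sketch only: the JDK and Hadoop paths
are the assumptions from the examples above, ``HADOOP_HOME`` must point
to your local Hadoop installation, and the ``--enable-omhdfs`` configure
switch is assumed to be available in your rsyslog source tree.

.. code-block:: none

   # adjust all paths to the local JDK/Hadoop layout -- these are
   # illustrative assumptions, not defaults
   export JAVA_INCLUDES="-I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/include \
      -I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/include/linux"
   export JAVA_LIBS="-L/usr/java/jdk1.6.0_21/jre/lib/amd64 \
      -L/usr/java/jdk1.6.0_21/jre/lib/amd64/server -ljava -ljvm -lverify"
   export HADOOP_HOME=/usr/lib/hadoop

   ./configure --enable-omhdfs
   make
   make install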


Examples
========

Example 1
---------

.. code-block:: none

   $ModLoad omhdfs
   $OMHDFSFileName /var/log/logfile
   *.* :omhdfs:
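
Example 2
---------

A second, hedged sketch showing how the default template can be
overridden. The template name ``MyHdfsFormat`` and its property list are
illustrative assumptions only.

.. code-block:: none

   $ModLoad omhdfs
   # hypothetical custom template -- adjust the properties as needed
   $template MyHdfsFormat,"%timegenerated% %HOSTNAME% %syslogtag%%msg%\n"
   $OMHDFSDefaultTemplate MyHdfsFormat
   $OMHDFSFileName /var/log/logfile
   *.* :omhdfs: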