Zabbix can be used for centralized monitoring and analysis of log files, with or without log rotation support.
Notifications can be used to warn users when a log file contains certain strings or string patterns.
To monitor a log file you must have:
* Zabbix agent running on the host
* a log monitoring item set up
The size limit of a monitored log file depends on large file support.
Make sure that in the agent configuration file:
* the ''Hostname'' parameter matches the host name configured in the frontend
* servers in the ''ServerActive'' parameter are specified for the processing of active checks
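Since ''log[]'' and ''logrt[]'' are active checks, the agent must know where to send data and under which host name. A minimal sketch of the relevant ''zabbix_agentd.conf'' lines (the server address and host name below are illustrative placeholders):

```
# zabbix_agentd.conf (fragment) - values are examples only
ServerActive=192.168.1.10   # Zabbix server that processes active checks
Hostname=MyHost             # must match the host name configured in the frontend
```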
Configure a log monitoring item:
Specifically for log monitoring items you enter:
| Type | Select //Zabbix agent (active)// here. |
| Key | Use one of the following item keys: ''log[]'' or ''logrt[]''. These item keys allow monitoring logs and filtering log entries by a content regexp, if present. For example: ''log[/var/log/syslog,error]''. Make sure that the file has read permissions for the 'zabbix' user, otherwise the item status will be set to 'unsupported'. See the supported Zabbix agent item key section for details on using these item keys and their parameters. |
| Type of information | Select //Log// here. |
| Update interval (in sec) | The parameter defines how often Zabbix agent will check for any changes in the log file. Setting it to 1 second will make sure that you get new records as soon as possible. |
| Log time format | In this field you may optionally specify the pattern for parsing the log line timestamp. If left blank the timestamp will not be parsed. Supported placeholders: **y** - Year (0001-9999), **M** - Month (01-12), **d** - Day (01-31), **h** - Hour (00-23), **m** - Minute (00-59), **s** - Second (00-59). For example, consider the following line from the Zabbix agent log file: " 23480:20100328:154718.045 Zabbix agent started. Zabbix 1.8.2 (revision 11211)." It begins with six character positions for the PID, followed by date, time, and the rest of the line. The log time format for this line would be "pppppp:yyyyMMdd:hhmmss". Note that "p" and ":" characters are just placeholders and can be anything but "yMdhms". |
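The positional placeholder matching described above can be sketched in Python (a simplified illustration of the idea, not Zabbix's actual parser):

```python
from datetime import datetime

def parse_log_time(line, fmt):
    # Each y/M/d/h/m/s in fmt consumes the character at the same
    # position in the line; any other format character is skipped.
    digits = {"y": "", "M": "", "d": "", "h": "", "m": "", "s": ""}
    for pos, ch in enumerate(fmt):
        if ch in digits:
            digits[ch] += line[pos]
    return datetime(int(digits["y"]), int(digits["M"]), int(digits["d"]),
                    int(digits["h"]), int(digits["m"]), int(digits["s"]))

line = " 23480:20100328:154718.045 Zabbix agent started. Zabbix 1.8.2 (revision 11211)."
ts = parse_log_time(line, "pppppp:yyyyMMdd:hhmmss")
```

Here the six "p" positions skip over the PID field, and the remaining placeholders pick the date and time digits out of the line.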
* The agent also internally uses inode numbers (on UNIX/GNU/Linux), file indexes (on Microsoft Windows) and MD5 sums of the first 512 bytes of the log file for improving decisions when log files get truncated and rotated.
* On UNIX/GNU/Linux systems it is assumed that the file systems where log files are stored report inode numbers, which can be used to track files.
* On Microsoft Windows Zabbix agent determines the file system type the log files reside on and uses:
* On NTFS file systems 64-bit file indexes.
* On ReFS file systems (only from Microsoft Windows Server 2012) 128-bit file IDs.
* On file systems where file indexes change (e.g. FAT32, exFAT) a fall-back algorithm is used to take a sensible approach in uncertain conditions when log file rotation results in multiple log files with the same last modification time.
* The inode numbers, file indexes and MD5 sums are internally collected by Zabbix agent. They are not transmitted to Zabbix server and are lost when Zabbix agent is stopped.
* Do not modify the last modification time of log files with 'touch' utility, do not copy a log file with later restoration of the original name (this will change the file inode number). In both cases the file will be counted as different and will be analyzed from the start, which may result in duplicated alerts.
* If there are several matching log files for ''logrt[]'' item and Zabbix agent is following the most recent of them and this most recent log file is deleted, a warning message ''"there are no files matching "<regexp mask>" in "<directory>"'' is logged. Zabbix agent ignores log files with modification time less than the most recent modification time seen by the agent for the ''logrt[]'' item being checked.
* Zabbix **2.2.10** fixes an issue [[https://support.zabbix.com/browse/ZBX-9290|ZBX-9290]] (unexpected re-reading of the whole log file from the beginning).
* The agent starts reading the log file from the point it stopped the previous time.
* The number of bytes already analyzed (the size counter) and last modification time (the time counter) are stored in the Zabbix database and are sent to the agent to make sure the agent starts reading the log file from this point in cases when the agent is just started or has received items which were previously disabled or not supported.
* Whenever the log file becomes smaller than the log size counter known by the agent, the counter is reset to zero and the agent starts reading the log file from the beginning taking the time counter into account.
* For ''logrt'' items, if there are several matching files with the same last modification time in the directory:
* before Zabbix **2.2.4** the agent reads the lexicographically smallest one.
* since Zabbix **2.2.4**:
* The agent tries to correctly analyze all log files with the same modification time and avoid skipping data or analyzing the same data twice, although it cannot be guaranteed in all situations.
* The agent does not assume any particular log file rotation scheme nor determines one. When presented with multiple log files with the same last modification time, the agent will process them in a lexicographically descending order. Thus, for some rotation schemes the log files will be analyzed and reported in their original order. For other rotation schemes the original log file order will not be honored, which can lead to reporting matched log file records in altered order (the problem does not happen if log files have different last modification times).
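A minimal sketch of the ordering described above (the file names are illustrative, assuming all three share the same last modification time):

```python
# Log files with identical mtime, e.g. after rotation (illustrative names)
files = ["app.log.1", "app.log.2", "app.log.3"]

# The agent processes same-mtime files in lexicographically descending order
processed = sorted(files, reverse=True)
```

For a rotation scheme where higher suffixes are older, descending order happens to match the original write order; for the opposite scheme it would not.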
* Zabbix agent processes new records of a log file once per //Update interval// seconds.
* Zabbix agent does not send more than **maxlines** of a log file per second. The limit prevents overloading of network and CPU resources and overrides the default value provided by the **MaxLinesPerSecond** parameter in the [[:manual:appendix:config:zabbix_agentd|agent configuration file]].
* To find the required string Zabbix will process 4 times more new lines than set in **MaxLinesPerSecond**. Thus, for example, if a ''log[]'' or ''logrt[]'' item has an //Update interval// of 1 second, by default the agent will analyse no more than 400 log file records and will send no more than 100 matching records to Zabbix server in one check. By increasing **MaxLinesPerSecond** in the agent configuration file or setting the **maxlines** parameter in the item key, the limit can be increased up to 4000 analysed log file records and 1000 matching records sent to Zabbix server in one check. If the //Update interval// is set to 2 seconds, the limits for one check would be twice as high as with an //Update interval// of 1 second.
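The arithmetic above can be sketched as follows (an illustrative helper, not a Zabbix API):

```python
def per_check_limits(max_lines_per_second, update_interval):
    # The agent analyses up to 4 x MaxLinesPerSecond lines and sends up to
    # MaxLinesPerSecond matching lines, per second of the update interval.
    analysed = 4 * max_lines_per_second * update_interval
    sent = max_lines_per_second * update_interval
    return analysed, sent
```

With the defaults (100 lines/second, 1-second interval) this gives the 400 analysed / 100 sent figures quoted above.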
* Additionally, log values are always limited to 50% of the agent send buffer size, even if there are no non-log values in it. So for the **maxlines** values to be sent in one connection (and not in several connections), the agent [[:manual:appendix:config:zabbix_agentd|BufferSize]] parameter must be at least maxlines x 2.
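For example, if an item key raises **maxlines** to 400, the buffer should be sized to hold at least twice that (a sketch with illustrative values):

```
# zabbix_agentd.conf (fragment) - illustrative values
# maxlines=400 in the item key => BufferSize >= 400 x 2
BufferSize=800
```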
* In the absence of log items the whole agent buffer is used for non-log values. When log values come in they replace older non-log values as needed, up to the designated 50%.
* For log file records longer than 256kB, only the first 256kB are matched against the regular expression and the rest of the record is ignored. However, if Zabbix agent is stopped while it is dealing with a long record, the agent's internal state is lost and the long record may be analysed again and differently after the agent is started again. This limit was introduced in Zabbix **2.2.3**.
* Special note for "\" path separators: if file_format is "file\.log", then there should not be a "file" directory, since it is not possible to unambiguously define whether "." is escaped or is the first symbol of the file name.
* Regular expressions for ''logrt'' are supported in filename only, directory regular expression matching is not supported.
* On UNIX platforms a ''logrt[]'' item becomes NOTSUPPORTED if a directory where the log files are expected to be found does not exist.
* On Microsoft Windows if a directory does not exist the item does not become NOTSUPPORTED (for example, if directory is misspelled in item key). Note that before Zabbix 2.2.3 the item would become NOTSUPPORTED.
* An absence of log files for ''logrt[]'' item does not make it NOTSUPPORTED (before Zabbix 2.2.3 it caused NOTSUPPORTED).
* Errors of reading log files for ''logrt[]'' item are logged as warnings into Zabbix agent log file but do not make the item NOTSUPPORTED (before Zabbix 2.2.3 it caused NOTSUPPORTED).
* The Zabbix agent log file can be helpful for finding out why a ''log[]'' or ''logrt[]'' item became NOTSUPPORTED. Zabbix can monitor its own agent log file, except at DebugLevel=4.
Sometimes we may want to extract only the interesting value from a target file instead of returning the whole line when a regular expression match is found.

Previously, if a regular expression match was found by Zabbix, the whole line containing the match was returned. Since Zabbix 2.2.0, log items have been extended to be able to extract desired values from matched lines. This has been accomplished by adding an additional ''output'' parameter to ''log'' and ''logrt'' items. ''output'' allows indicating the subgroup of the match that we may be interested in.
So, for example, a regular expression containing the capture group ''([0-9]+)'' with ''output'' set to ''\1'' should allow returning the entry count as found in the content of:
Fr Feb 07 2014 11:07:36.6690 */ Thread Id 1400 (GLEWF) large result
buffer allocation - /Length: 437136/Entries: 5948/Client Ver: >=10/RPC
ID: 41726453/User: AUser/Form: CFG:ServiceLevelAgreement
The reason why Zabbix returns only the number is because ''output'' here is defined by ''\1'', referring to the first and only subgroup of interest: ''([0-9]+)''.
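The extraction can be reproduced with a plain regular expression (a sketch; the pattern below is an assumption chosen to match the sample line, with the captured subgroup playing the role of ''\1'' in the item's ''output'' parameter):

```python
import re

line = ("Fr Feb 07 2014 11:07:36.6690 */ Thread Id 1400 (GLEWF) large result "
        "buffer allocation - /Length: 437136/Entries: 5948/Client Ver: >=10/RPC "
        "ID: 41726453/User: AUser/Form: CFG:ServiceLevelAgreement")

# Capture the entry count; group(1) is what output=\1 would return
match = re.search(r"Entries: ([0-9]+)", line)
entries = match.group(1)
```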
And, with the ability to extract and return a number, the value can be used to define triggers.