#!/usr/bin/perl
=head1 INSTALLATION
This plugin requires data from Apache. You can get at the data in two ways:
1) Install the pipelogger (logs without using disk space, RAM only, highly performant)
- Install /usr/share/munin/apache_pipelogger as executable for apache/wwwrun
- Install logger to httpd.conf
# Log vhost port method response_bytes response_time_ms httpd_status
CustomLog "|/usr/share/munin/apache_pipelogger" "$v %p %m %B %D %s"
2) Install the log parser as a daemon (watches multiple access logs in a single folder for changes)
- the log parser should run as root (it can simply be run in the background)
- slightly less performant, but easier to apply to existing installations
- If you want response time stats, you have to log them in Apache (the CustomLog reference is shown after this list):
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined-time
- Configure the log parser to match your installation regarding naming and log folders
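The combined-time format then has to be referenced by the CustomLog directive of each vhost you want response time stats for, e.g. (the path is illustrative):
CustomLog /var/log/apache2/www.example.com-access.log combined-time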
You can use both solutions simultaneously; the data will be merged.
Be aware that an Apache CustomLog directive in the master config will only log those vhosts that have no CustomLog directive of their own.
Install the plugin conf (after any [apache_*] section):
[apache_vhosts]
user root
env.subgraphs requests bytes time
env.checks requests bytes time
# user - probably necessary for shared memory IPC
# subgraphs - create multigraph subgraphs (watch your graphing performance...), default 0
# checks - enable stats on bytes and response times per request; you have to log these in Apache
A word on performance:
Requests/sec should not be much of a problem. Pipelogger and Logparser should not have many performance problems, as they apply one regex per line and add some stats.
Stats are saved every n seconds (default: 7) to shared memory in serialized format. That should be fine even on heavily loaded servers (unless you are watching cache logs).
I would estimate that more than 10k log lines/sec could start becoming a problem; at that point you may have to start tuning, or use a dedicated system.
You might think about splitting the logs over multiple Logparser scripts to parallelize and merge in larger intervals.
Graphing is another matter; it gets more expensive the more vhosts you have.
With subgraphs off, you get 3 main graphs * 4 timescales (day, week, month, year).
With subgraphs on, you get 2 checks * (1 + 6 * #vhosts) + 1 check * (1 + #vhosts * #statuscodes * 4) graphs.
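For example, with 100 vhosts and 6 distinct status codes that works out to 2 * (1 + 6 * 100) + 1 * (1 + 100 * 6 * 4) = 1202 + 2401 = 3603 graphs.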
With hundreds of vhosts that becomes a problem, as munin-update and munin-html do not scale well.
Timeouts are another matter: munin-update calls for the plugin data and works on the received lines while the network timeout is running.
So expect to set your timeouts to 120s with a hundred vhosts.
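For illustration, the writer side of this shared memory handoff looks roughly like the sketch below. This is not the actual pipelogger code: the 'mapl' key and the Storable serialization match what the plugin reads, while the nested per-vhost counter layout and the vhost name are illustrative assumptions.
use IPC::ShareLite ':lock';
use Storable qw(freeze thaw);
my $share = IPC::ShareLite->new(
-key => 'mapl', -create => 1, -destroy => 0, -mode => '0744'
) or die $!;
$share->lock(LOCK_EX); # serialize writers and the reading plugin
my %data = eval { %{thaw($share->fetch)} }; # existing stats, if any
$data{'www.example.com'}{'requests'}++; # hypothetical counter update
$share->store( freeze \%data );
$share->unlock();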
=head1 MAGIC MARKERS
#%# family=auto
#%# capabilities=autoconf
=head1 LICENSE
GPLv2
=cut
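# which stats to collect and which to expand into multigraph subgraphs,
# taken from env.checks / env.subgraphs in the plugin configuration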
my %checks = map {$_=>1} ( ($ENV{'checks'}) ? split(/ /,$ENV{'checks'}) : qw(requests bytes time) );
my %subgraphs= map {$_=>1} ( ($ENV{'subgraphs'}) ? split(/ /,$ENV{'subgraphs'}) : () );
use strict;
#use warnings;
use Munin::Plugin;
use IPC::ShareLite ':lock';
use Storable qw(freeze thaw);
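# attach to the shared memory segment written by the pipelogger/logparser;
# -create => 0 means the plugin dies below if no logger has created it yet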
my $share = IPC::ShareLite->new(
-key => 'mapl',
-create => 0,
-destroy => 0,
-exclusive => 0,
-mode => '0744'
) or die $!;
my %data=eval{%{thaw($share->fetch)}}; # using eval to suppress thaw error on empty string at the first run
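# %data now holds the per-vhost stats accumulated by the logger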
if ( defined $ARGV[0] and $ARGV[0] eq "autoconf" ) {
if (scalar(keys %data)>0) {
print "yes\n";
exit 0;
} else {
print "no data available, apache_pipelogger not installed\n";
exit 0;
}
}
need_multigraph();
my ($config,$values);
#
# config
#
if ( defined $ARGV[0] and $ARGV[0] eq "config" ) {
foreach my $check (keys %checks) {
next if ($check eq 'requests'); # requests are special
my $order=join("_$check ",sort keys %data)."_$check";
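# field order: one "<vhost>_<check>" field per vhost, sorted by vhost name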
#
# config: bytes / time + subgraphs
#
# (the heredoc printing the multigraph config for this check is elided here)
}
exit 0;
}
#
# values section elided here: it walks %data and prints the per-vhost stats
#
$share->lock(LOCK_EX);
$share->store( freeze \%data );
$share->unlock();
exit 0;
# vim:syntax=perl