Celery centralized logging

As some of you already using Celery know, extending the Celery logger used to be a bit tricky, mainly because the logger object is not unique: the same handler is added to several different logger objects (the MainProcess logger, the PoolWorker logger, the task logger).
For this reason, logging.getLogger("Celery") only gives you back the MainProcess logger.
Since version 2.2.7 of Celery it is possible to extend all of these logger objects by using two new signals: after_setup_logger and after_setup_task_logger.

The following lines of code are all you need to ship all your Celery workers' logs to a syslog server:

import logging
import logging.handlers
from celery.signals import after_setup_logger, after_setup_task_logger

def after_setup_logger_handler(sender=None, logger=None, loglevel=None,
                               logfile=None, format=None,
                               colorize=None, **kwds):
    # Forward every record to the central syslog server over UDP
    handler = logging.handlers.SysLogHandler(address=('syslogserver', 514))
    handler.setFormatter(logging.Formatter('celery: %(message)s'))
    logger.addHandler(handler)

after_setup_logger.connect(after_setup_logger_handler)
after_setup_task_logger.connect(after_setup_logger_handler)


As you can see, I connect the same handler function to both signals, because I need to add the handler to every task logger and to every global logger (Worker, MainProcess).
Just pay attention not to connect the handler function to a signal more than once, or you will get duplicate log lines; for this reason, don't put this code in a configuration file but in a separate module that you are sure is imported only once.
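As an extra safety net, you can also make the handler setup idempotent by checking whether a SysLogHandler is already attached before adding a new one, so an accidental double import doesn't hurt. A minimal sketch (the address here is a placeholder; use your syslog server):

```python
import logging
import logging.handlers

def add_central_syslog_handler(logger):
    # Skip if a SysLogHandler is already attached: calling this twice
    # (e.g. because the module was imported more than once) adds nothing.
    if any(isinstance(h, logging.handlers.SysLogHandler)
           for h in logger.handlers):
        return
    # UDP syslog handler; '127.0.0.1' is a placeholder address.
    handler = logging.handlers.SysLogHandler(address=('127.0.0.1', 514))
    logger.addHandler(handler)
```

Call this from the signal handler instead of adding the handler directly, and duplicates become impossible by construction.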
You probably also want a separate file for your Celery logs on your rsyslog server; it's enough to add a filter on the IP addresses of your Celery workers:

if $fromhost-ip startswith '192.0.2.' then /var/log/celery_central.log
& ~
if $fromhost-ip startswith '192.0.3.' then /var/log/celery_central.log
& ~

Don't forget to put these rules before all the others, and to enable the reception of remote logs in your rsyslog config file.
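For reference, enabling remote reception in rsyslog (using the same legacy directive syntax as the filters above) looks something like this; the port must match the one you passed to the SysLogHandler:

```
# /etc/rsyslog.conf: load the UDP input module and listen on port 514
$ModLoad imudp
$UDPServerRun 514
```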
Of course I'm using a syslog handler here, but you can extend the logs in many ways; in the end it's just Python 🙂.
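For instance, here is a sketch of the same kind of signal callback using an SMTPHandler instead, so that anything at ERROR level or above gets mailed to you (the mail host and addresses are placeholders, not from the original setup):

```python
import logging
import logging.handlers

def mail_errors_handler(sender=None, logger=None, loglevel=None,
                        logfile=None, format=None, colorize=None, **kwds):
    # Placeholder mail server and addresses; adjust for your environment.
    handler = logging.handlers.SMTPHandler(
        mailhost=('mail.example.com', 25),
        fromaddr='celery@example.com',
        toaddrs=['ops@example.com'],
        subject='Celery worker error')
    handler.setLevel(logging.ERROR)  # only mail errors, not every log line
    logger.addHandler(handler)
```

Connect it to after_setup_logger and after_setup_task_logger exactly like the syslog handler above.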