Python 2.3 introduced the logging module to the Python standard library. logging provides a standard interface for outputting information from a running application. The classic example of a logging mechanism is writing data to a text file, which is a plain old log file. While the log file is likely the most common form of logging, the logging module provides the ability to send information to file-like objects, TCP and UDP sockets, email servers, the Unix syslog, the NT event log, memory buffers, and HTTP servers in addition to "real" files.
The logging library takes a modular approach and offers several categories of components: loggers, handlers, filters, and formatters. Loggers expose the interface that application code directly uses. Handlers send the log records to the appropriate destination. Filters provide a finer-grained facility for determining which log records to send on to a handler. Formatters specify the layout of the resultant log record.
Logger objects have a threefold job. First, they expose several methods to application code so that applications can log messages at runtime. Second, logger objects determine which log messages to act upon based upon severity (the default filtering facility) or filter objects. Third, logger objects pass along relevant log messages to all interested log handlers.
The most widely used methods on logger objects fall into two categories: configuration and message sending.
The configuration methods are:
setLevel() specifies the lowest-severity log message a logger will handle, where debug is the lowest built-in severity level and critical is the highest built-in severity. For example, if the severity level is info, the logger will handle only info, warning, error, and critical messages and will ignore debug messages.
addFilter() and removeFilter() add and remove filter objects from the logger object. This article does not address filter objects.
With the logger object configured, the following methods create log messages:
debug(log_message, [*args[, **kwargs]])
info(log_message, [*args[, **kwargs]])
warning(log_message, [*args[, **kwargs]])
error(log_message, [*args[, **kwargs]])
critical(log_message, [*args[, **kwargs]])
log(log_level, log_message, [*args[, **kwargs]])
debug(), info(), warning(), error(), and critical() all create log records with a
log_message and a level that corresponds to their
respective method names.
log_message is actually a format string,
which may contain the standard string substitution syntax of
%s, %d, %f, and so on.
*args is a list
of objects that correspond with the substitution fields in
log_message. With regard to
**kwargs, the logging
methods care only about a keyword of
exc_info and use it to
determine whether to log exception information.
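To make the *args and exc_info behavior concrete, here is a minimal sketch; the logger name is arbitrary:

```python
import logging

logger = logging.getLogger("args_example")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())

# *args fill the %-style substitution fields in log_message
logger.debug("processing item %s of %d", "widget", 42)

# exc_info tells the logging method to append exception information
try:
    1 / 0
except ZeroDivisionError:
    logger.error("division failed", exc_info=1)
```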
exception() creates a log message similar to
error(). The difference is that
exception() dumps a
stack trace along with it. Call this method only from an exception handler.
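For instance, in this small sketch (logger name arbitrary), calling exception() from inside the except block logs at error severity and attaches the traceback:

```python
import logging

logger = logging.getLogger("exc_example")
logger.addHandler(logging.StreamHandler())

try:
    int("not a number")
except ValueError:
    # logs at ERROR severity and appends the traceback
    logger.exception("conversion failed")
```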
log() takes a log level as an explicit argument. This is a
little more verbose for logging messages than using the log level convenience
methods listed above, but this is how to log at custom log levels.
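As a sketch of logging at a custom level, the level name TRACE and the value 15 below are assumptions for illustration, not standard levels:

```python
import logging

TRACE = 15  # hypothetical level between DEBUG (10) and INFO (20)
logging.addLevelName(TRACE, "TRACE")

logger = logging.getLogger("custom_level_example")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())

# log() takes the level explicitly, so custom levels work
logger.log(TRACE, "a message at the custom TRACE level")
```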
logging.getLogger([name]) returns a reference to a logger
instance with a name of
name if a name is provided, or
root if not. The names are period-separated
hierarchical structures. Multiple calls to getLogger()
with the same name will return a reference to the same logger object. Loggers
that are further down in the hierarchical list are children of loggers higher up in the list. For example, given a logger with a name of
foo, loggers with names of
foo.bar, foo.bar.baz, and foo.bam are all children of
foo. Child loggers propagate messages up to their parent loggers.
Because of this, it is unnecessary to define and configure all the loggers an
application uses. It is sufficient to configure a top-level logger and create
child loggers as needed.
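A short sketch of this behavior (the foo names echo the example above):

```python
import logging

# repeated calls with the same name return the very same object
assert logging.getLogger("foo") is logging.getLogger("foo")

# configure only the top-level logger...
parent = logging.getLogger("foo")
parent.setLevel(logging.DEBUG)
parent.addHandler(logging.StreamHandler())

# ...and create (but do not configure) children as needed;
# their messages propagate up to foo's handler
child = logging.getLogger("foo.bar.baz")
child.info("handled by the handler attached to foo")
```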
Handler objects are responsible for dispatching the appropriate log messages
(based on the log messages' severity) to the handler's specified destination.
Logger objects can add zero or more handler objects to themselves with an
addHandler() method. As an example scenario, an application may
want to send all log messages to a log file, all log messages of
error or higher to stdout, and all messages of
critical to an email address. This scenario requires three
individual handlers where each handler is responsible for sending messages of a
specific severity to a specific location.
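That scenario might be sketched as follows; the filename, mail host, and addresses are placeholders, and the email handler is left commented out so the sketch runs without a mail server:

```python
import logging
import logging.handlers
import sys

logger = logging.getLogger("three_destinations")
logger.setLevel(logging.DEBUG)

# every message, debug and up, goes to a log file
fh = logging.FileHandler("app.log")   # filename is a placeholder
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)

# error and higher also go to stdout
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.ERROR)
logger.addHandler(ch)

# critical messages would additionally go out by email; the host and
# addresses below are hypothetical, so the handler stays commented out
#eh = logging.handlers.SMTPHandler("mail.example.com", "app@example.com",
#                                  ["admin@example.com"], "critical error")
#eh.setLevel(logging.CRITICAL)
#logger.addHandler(eh)

logger.debug("file only")
logger.error("file and console")
```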
The standard library includes handlers for each of the destinations mentioned in the introduction, among them StreamHandler, FileHandler, RotatingFileHandler, SocketHandler, DatagramHandler, SysLogHandler, NTEventLogHandler, SMTPHandler, MemoryHandler, and HTTPHandler. This article uses only StreamHandler and FileHandler in its examples.
There are very few methods in a handler for application developers to concern themselves with. The only handler methods that seem relevant for application developers who are using the built-in handler objects (that is, not creating custom handlers) are the following configuration methods:
The setLevel() method, just as in logger objects, specifies the
lowest severity that will be dispatched to the appropriate destination. Why are
there two setLevel() methods? The level set in the logger
determines which severity of messages it will pass to its handlers. The level
set in each handler determines which messages that handler will send on.
setFormatter() selects a Formatter object for this handler to use.
addFilter() and removeFilter() respectively configure
and deconfigure filter objects on handlers.
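A quick sketch of how the two setLevel() calls interact (names arbitrary):

```python
import logging
import sys

logger = logging.getLogger("two_levels")
logger.setLevel(logging.INFO)       # the logger drops anything below INFO

h = logging.StreamHandler(sys.stdout)
h.setLevel(logging.WARNING)         # the handler drops anything below WARNING
logger.addHandler(h)

logger.debug("dropped by the logger")
logger.info("passed by the logger, then dropped by the handler")
logger.warning("emitted to stdout")
```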
Application code should not directly instantiate and use the
logging.Handler class. Instead, logging.Handler serves as a base class that defines the
interface that all handlers should have and establishes some
default behavior that child classes can use (or override).
Formatter objects configure the final order, structure, and contents of the
log message. Unlike the base
logging.Handler class, application
code may instantiate formatter classes, although you could likely subclass the
formatter if your application needs special behavior. The constructor takes two
optional arguments: a message format string and a date format string. If there
is no message format string, the default is to use the raw message. If there is
no date format string, the default date format is %Y-%m-%d %H:%M:%S,
with the milliseconds tacked on at the end.
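A brief sketch of constructing a formatter with both optional arguments; the format strings here are illustrative:

```python
import logging

# message format plus an explicit date format (seconds precision)
formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s",
                              "%Y-%m-%d %H:%M:%S")

# a hand-built record, just to show the formatter in isolation
record = logging.LogRecord("fmt_example", logging.INFO, "example.py", 1,
                           "hello", None, None)
print(formatter.format(record))
```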
The message format string uses
%-style string substitution. Here is the
Formatter class docstring, which describes the valid substitution strings and what they represent:
%(name)s            Name of the logger (logging channel)
%(levelno)s         Numeric logging level for the message (DEBUG, INFO,
                    WARNING, ERROR, CRITICAL)
%(levelname)s       Text logging level for the message ("DEBUG", "INFO",
                    "WARNING", "ERROR", "CRITICAL")
%(pathname)s        Full pathname of the source file where the logging
                    call was issued (if available)
%(filename)s        Filename portion of pathname
%(module)s          Module (name portion of filename)
%(lineno)d          Source line number where the logging call was issued
                    (if available)
%(created)f         Time when the LogRecord was created (time.time()
                    return value)
%(asctime)s         Textual time when the LogRecord was created
%(msecs)d           Millisecond portion of the creation time
%(relativeCreated)d Time in milliseconds when the LogRecord was created,
                    relative to the time the logging module was loaded
                    (typically at application startup time)
%(thread)d          Thread ID (if available)
%(process)d         Process ID (if available)
%(message)s         The result of record.getMessage(), computed just as
                    the record is emitted
The following message format string will log the time in a human-readable format, the severity of the message, and the contents of the message, in that order:
"%(asctime)s - %(levelname)s - %(message)s"
Programmers can configure logging either by creating loggers, handlers, and formatters explicitly in a main module with the configuration methods listed above (using Python code), or by creating a logging config file. The following code is an example of configuring a very simple logger, a console handler, and a simple formatter in a Python module:
import logging

#create logger
logger = logging.getLogger("simple_example")
logger.setLevel(logging.DEBUG)

#create console handler and set level to debug
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

#create formatter
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

#add formatter to ch
ch.setFormatter(formatter)

#add ch to logger
logger.addHandler(ch)

#"application" code
logger.debug("debug message")
logger.info("info message")
logger.warn("warn message")
logger.error("error message")
logger.critical("critical message")
Running this module from the command line produces the following output:
jmjones@bean:~/logging $ python simple_logging_module.py
2005-03-19 15:10:26,618 - simple_example - DEBUG - debug message
2005-03-19 15:10:26,620 - simple_example - INFO - info message
2005-03-19 15:10:26,695 - simple_example - WARNING - warn message
2005-03-19 15:10:26,697 - simple_example - ERROR - error message
2005-03-19 15:10:26,773 - simple_example - CRITICAL - critical message
The following Python module uses the config file approach to create a logger, handler, and formatter nearly identical to those in the example listed above, with the only difference being the names of the objects:
import logging
import logging.config

logging.config.fileConfig("logging.conf")

#create logger
logger = logging.getLogger("simpleExample")

#"application" code
logger.debug("debug message")
logger.info("info message")
logger.warn("warn message")
logger.error("error message")
logger.critical("critical message")
Here is the logging.conf file:
[loggers]
keys=root,simpleExample

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_simpleExample]
level=DEBUG
handlers=consoleHandler
qualname=simpleExample
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
The output is nearly identical to that of the non-config-file-based example:
jmjones@bean:~/logging $ python simple_logging_config.py
2005-03-19 15:38:55,977 - simpleExample - DEBUG - debug message
2005-03-19 15:38:55,979 - simpleExample - INFO - info message
2005-03-19 15:38:56,054 - simpleExample - WARNING - warn message
2005-03-19 15:38:56,055 - simpleExample - ERROR - error message
2005-03-19 15:38:56,130 - simpleExample - CRITICAL - critical message
The config file approach has a few advantages over the Python code approach.
The first is the separation of configuration and code. The second is the ability of noncoders to easily modify the logging properties. The third is the
logging.config.listen() method, which causes the application to listen
on a socket for new configurations--and will update configurations at runtime
without forcing you to restart the application!
Here is a slight modification of the previous config file-based script:
#!/usr/bin/env python
import logging
import logging.config
import time
import os

#specify logging config file
logging.config.fileConfig("logging.conf")

#create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

#create logger
logger = logging.getLogger("simpleExample")

#watch for existence of file named "f"
#loop through the code while this file exists
while os.path.isfile('f'):
    logger.debug("debug message")
    logger.info("info message")
    logger.warn("warn message")
    logger.error("error message")
    logger.critical("critical message")
    time.sleep(5)

#cleanup
logging.config.stopListening()
t.join()
That was simple enough. Unfortunately, figuring out what format the config
file needs to be in took some investigation. The only useful information that I
found in the Python Standard Library Reference documentation was in the logging
configuration section under the listen() function, which states:
Logging configurations will be sent as a file suitable for processing by fileConfig().
Pushing a filename did not work. Pushing the contents of a config file did
not work. I had to dig into the source a little. The docstring for the logging
handle() method is:
Handle a request. Each request is expected to be a 4-byte length, followed by the config file. Uses fileConfig() to do the grunt work.
This struck me as a bit odd. Does that mean you can send lengths only for
config files of up to 9,999 bytes? Converting the length of the config file to a
string did not work either. I looked farther down in the source code of the
handle() method. Aaahh. It does a struct.unpack(),
so the socket expects the first 4 bytes to be packed binary data. I tried
it this way and it worked. The following snippet of code sends the contents of
the file named on the command line to localhost:9999:
#!/usr/bin/env python
import socket
import sys
import struct

HOST = 'localhost'
PORT = 9999

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print "connecting..."
s.connect((HOST, PORT))
print "connected..."
data_to_send = open(sys.argv[1], "r").read()
print "sending length..."
s.send(struct.pack(">L", len(data_to_send)))
print "sending data..."
s.send(data_to_send)
print "closing..."
s.close()
This article contains several examples of simple usage of the logging module. Two additional examples deserve special attention: using multiple handlers or formatters and using logging in multimodule Python applications.
Loggers are plain Python objects. The
addHandler() method has
no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example:
#!/usr/bin/env python
import logging

#create logger
logger = logging.getLogger("simple_example")
logger.setLevel(logging.DEBUG)

#create console handler and set level to error
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

#create file handler and set level to debug
fh = logging.FileHandler("spam.log")
fh.setLevel(logging.DEBUG)

#create formatter
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

#add formatter to ch and fh
ch.setFormatter(formatter)
fh.setFormatter(formatter)

#add ch and fh to logger
logger.addHandler(ch)
logger.addHandler(fh)

#"application" code
logger.debug("debug message")
logger.info("info message")
logger.warn("warn message")
logger.error("error message")
logger.critical("critical message")
Notice that the "application" code did not change between the single-handler
example and this one. All that changed was the addition and
configuration of a new handler named fh.
The ability to create new handlers with higher- or lower-severity filters can
be very helpful when writing and testing an application. How many times have I
written a horde of print statements into code while debugging a problem, only
to strip them out once the problem was fixed? Using
logger.debug instead of print, the
logger.debug statements can remain intact in the source
code and remain dormant until you need them again. At that time, the only
change that needs to happen is to modify the severity level of the logger
and/or handler to debug.
I mentioned above that multiple calls to
logging.getLogger('someLogger') return a reference to the same
logger object. This is true not only within the same module, but also across modules
as long as the calls occur in the same Python interpreter process. Beyond
references to the same object, application code can define and
configure a parent logger in one module and create (but not configure) a child
logger in a separate module, and all logger calls to the child will pass up to
the parent. Here is a main module:
#!/usr/bin/env python
import logging
import auxiliary_module

#create logger with "spam_application"
logger = logging.getLogger("spam_application")
logger.setLevel(logging.DEBUG)

#create file handler and set level to debug
fh = logging.FileHandler("spam.log")
fh.setLevel(logging.DEBUG)

#create console handler and set level to error
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

#create formatter
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

#add formatter to fh
fh.setFormatter(formatter)

#add formatter to ch
ch.setFormatter(formatter)

#add fh to logger
logger.addHandler(fh)

#add ch to logger
logger.addHandler(ch)

logger.info("creating an instance of auxiliary_module.Auxiliary")
a = auxiliary_module.Auxiliary()
logger.info("created an instance of auxiliary_module.Auxiliary")
logger.info("calling auxiliary_module.Auxiliary.do_something")
a.do_something()
logger.info("finished auxiliary_module.Auxiliary.do_something")
logger.info("calling auxiliary_module.some_function()")
auxiliary_module.some_function()
logger.info("done with auxiliary_module.some_function()")
Here is the auxiliary module:
#!/usr/bin/env python
import logging

#create logger
module_logger = logging.getLogger("spam_application.auxiliary")

class Auxiliary:
    def __init__(self):
        self.logger = logging.getLogger("spam_application.auxiliary.Auxiliary")
        self.logger.info("creating an instance of Auxiliary")
    def do_something(self):
        self.logger.info("doing something")
        a = 1 + 1
        self.logger.info("done doing something")

def some_function():
    module_logger.info("received a call to \"some_function\"")
The output looks like this:
2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary
2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something
2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something
2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function()
2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to "some_function"
2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function()
logging was a great addition to the standard library. It
provides application developers a simple, single interface for outputting
information from a running application. On the very simple end, it can write
information to a standard log file. On the more advanced end, it can write the
same information to a socket--and it can do both without having to rewrite a
single line of application code or even restart the process. So the next
time you start to put a bunch of print statements into your code while
debugging, consider reaching for logging and logger.debug() instead. You won't be disappointed.
Jeremy Jones is a software engineer who works for Predictix. His weapon of choice is Python.
Copyright © 2009 O'Reilly Media, Inc.