[Python-ideas] Fwd: [Python-Dev] An yocto change proposal in logging module to simplify structured logs support

Andrew Barnert abarnert at yahoo.com
Mon May 25 22:08:46 CEST 2015


On Monday, May 25, 2015 6:57 AM, Ludovic Gasc <gmludo at gmail.com> wrote:

>2015-05-25 4:19 GMT+02:00 Steven D'Aprano <steve at pearwood.info>:

>>At the other extreme, there is the structlog module:
>>
>>https://structlog.readthedocs.org/en/stable/
>
>Thank you for the link; it's an interesting project, like the "logging" module on steroids, with some good logging ideas inside.


>However, if I understand correctly, it's the same approach as the previous recipe: generate a log file with JSON content, use logstash-forwarder to reparse that JSON, and finally send the structure to logstash for the query part: https://structlog.readthedocs.org/en/stable/standard-library.html#suggested-configuration
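
(For context, that recipe boils down to roughly the following sketch: structlog renders each event dict as one JSON line through stdlib logging, and logstash-forwarder then ships the resulting file. The processor list is abbreviated and the file name is just an example, not anything from the linked page.)

    # Rough sketch: structlog renders each event dict as JSON and hands the
    # line to stdlib logging, which writes the file logstash-forwarder tails.
    import logging
    import structlog

    logging.basicConfig(format="%(message)s", filename="app.log.json",
                        level=logging.INFO)

    structlog.configure(
        processors=[
            structlog.processors.TimeStamper(fmt="iso"),
            structlog.processors.JSONRenderer(),  # dict -> one JSON line
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
    )

    log = structlog.get_logger()
    log.info("user_logged_in", user_id=42)
    # app.log.json now contains something like:
    # {"user_id": 42, "timestamp": "...", "event": "user_logged_in"}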

>>How does your change compare to those?
>>
>
>
>In the structlog use case, my change drops the logstash-forwarder step and connects the Python daemon directly to the structured-log daemon.

>Even if logstash-forwarder is efficient, why have an additional step to rebuild a structure you already had at the beginning?

You can't send a Python dictionary over the wire, or store a Python dictionary in a database. You need to encode it to some transmission and/or storage format; there's no way around that. And what's wrong with using JSON as that format?
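
To make that concrete, here is a minimal illustration (the event fields are made up): the dict is serialized to one JSON line, and the consumer gets the identical structure back with a single json.loads().

    # A dict has to be serialized before it can cross a socket or land in a
    # database; JSON round-trips the structure with no loss for this use case.
    import json

    event = {"event": "user_logged_in", "user_id": 42, "level": "info"}
    line = json.dumps(event)            # the string that actually goes over the wire
    assert json.loads(line) == event    # the consumer rebuilds the same dict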

More importantly, when you drop logstash-forwarder, how are you intending to get the messages to the upstream server? You don't want to make your log calls synchronously wait for acknowledgement before returning. So you need some kind of buffering. And just buffering in memory doesn't work: if your service shuts down unexpectedly, you've lost the last batch of log messages which would tell you why it went down (plus, if the network goes down temporarily, your memory use becomes unbounded). You can of course buffer to disk, but then you've just reintroduced the same need for some kind of intermediate storage format you were trying to eliminate—and it doesn't really solve the problem, because if your service shuts down, the last messages won't get sent until it starts up again. So you could write a separate simple store-and-forward daemon that either reads those file buffers or listens on localhost UDP… but then you've just recreated logstash-forwarder.
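
As a sketch of that "buffer to disk" option (the spool file name and event fields here are hypothetical, not anything from the proposal): the application appends one JSON line per event and returns immediately, while a separate forwarder process, whether logstash-forwarder or a hand-rolled daemon, tails the file and handles the slow, retried network delivery.

    # Sketch of the disk-buffer option: append one JSON line per event and
    # return immediately; a separate forwarder (logstash-forwarder or a
    # hand-rolled daemon) tails the spool file and does the network delivery.
    import json
    import time

    SPOOL = "events.jsonl"   # hypothetical spool file the forwarder would tail

    def emit(event, **fields):
        record = {"event": event, "ts": time.time()}
        record.update(fields)
        with open(SPOOL, "a", encoding="utf-8") as fp:
            fp.write(json.dumps(record) + "\n")   # survives an app crash

    emit("user_logged_in", user_id=42)
    # Note: you are back to a JSON-on-disk format plus a forwarder process,
    # which is exactly the pair of pieces the proposal wanted to drop.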

And even if you wanted to do all that, I don't see why you couldn't do it all with structlog. They recommend using an already-working workflow instead of designing a different one from scratch, but it's just a recommendation.

