¶ Log Machine
I'm just a log machine!
|
Log Machine is not a logging framework; it is a wrapper, or 'façade', around existing frameworks such as Log4j, Commons Logging, and Logback. It will also wrap around the existing SLF4J logging façade. LM provides a clean and simple way to log events in your Java applications. The status quo is robust under the hood, but somewhat clunky and hard to maintain on the surface. Logging is code, and code is data, so by enhancing the logging statements we can generate better quality data. Another goal is to blur the lines between logging and analytics. They tend to occupy the same space, doing the same thing but for different masters (diagnostics versus metrics). By providing finer targeting of events, with richer metadata, logging becomes a more useful tool for capturing application metrics.
|
¶ API
The LogMachine fluent API provides the standard level-based logging methods, along with a fluent chain of decorator methods for enriching each event:
log.to(Redis, User)
.because(ex)
.info("User {@ id} disconnected.", userID);
|
¶ Methods
The basic logging methods are what you might expect to find in any of the existing frameworks, namely the level-based logging. LogMachine adds a few tricks to the basic repertoire of logging capabilities. These and the other methods form a fluent chain, decorating the final logging statement with additional information.
|
Log an event. If the current log level is too low, then no action will be taken.
.error(String message)
.warn(String message)
.info(String message)
.debug(String message)
.trace(String message)
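For example, a minimal sketch of plain level-based calls (the 'log' instance and messages here are illustrative):

log.info("server started");
log.warn("connection pool nearly exhausted");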
|
Log an event with an included exception. |
.error(String message, Throwable exception)
.warn(String message, Throwable exception)
.info(String message, Throwable exception)
.debug(String message, Throwable exception)
.trace(String message, Throwable exception)
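For example, a brief sketch logging a caught exception alongside a message (the try block and names are illustrative):

try {
    channel.flush();
} catch (IOException e) {
    log.error("failed to flush the channel", e);
}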
|
Log an event with a templated message. The arguments can be accessed in order by using '{}' notation in your log message. This is similar to what the SLF4J/Logback API provides. If the last object in the varargs list is a Throwable, it is set as the event's exception rather than treated as message data. See the Message Formatting section below for more information on that feature.
.error(String message, Object...data)
.warn(String message, Object...data)
.info(String message, Object...data)
.debug(String message, Object...data)
.trace(String message, Object...data)
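For example, a small sketch using the ordered '{}' notation (the counters are illustrative):

log.info("processed {} records in {} ms", recordCount, elapsedMillis);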
|
Check the log level. These are compatible with SLF4J.
.isErrorEnabled()
.isWarnEnabled()
.isInfoEnabled()
.isDebugEnabled()
.isTraceEnabled()
|
Check the log level. These are the more concise LogMachine versions of the above statements.
.isError()
.isWarn()
.isInfo()
.isDebug()
.isTrace()
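A typical use is guarding an expensive message; a brief sketch (the dump method is hypothetical):

if (log.isDebug()) {
    log.debug("cache contents: {}", cache.dumpContents());
}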
|
Sets the exception for the event.
.because(Throwable cause)
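For example, a sketch attaching a caught exception before the level call, following the chained style shown above (the names are illustrative):

log.because(ex).error("failed to persist the session");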
|
Sets the location for the log event, which could be a method name, class name, or some other logical identifier. The event location is initialized to the current 'class#method:line'; calling this method will replace that data.
.from(String location)
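For example, a sketch replacing the captured location with a logical identifier, assuming from(...) chains like the other decorators (the location string is illustrative):

log.from("UserService#createUser").info("created a new user");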
|
¶ Event Data
A log event is often rich with data, but we usually settle for turning all of this data into a flattened string. Log Machine stores metadata along with the log message, so that a detailed data structure can be assembled later which describes the event. Data can be assigned using the with(...) methods.
|
¶ Methods |
|
Adds a data point to the log event.
.with(String name, Number value)
.with(String name, String value)
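For example, a sketch chaining several data points onto a single event (the names and values are illustrative):

log.with("latency", elapsedMillis)
   .with("host", hostName)
   .info("request complete");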
|
¶ Message Formatting
A special formatting syntax allows data to be both read and written via the message string associated with each log event. When a pair of unescaped brackets is encountered in the message, it is replaced according to the rules below. For users of SLF4J, this syntax should be at least partially familiar: the plain '{}' style of replacement, where each pair consumes the next argument in order, works here as well.
|
¶ Syntax |
|
Access the event arguments array in order, where the first instance of '{}' consumes the first argument, the second instance consumes the second, and so on.
log.error("failed to process chunk {} of {}", currentChunk, totalChunks);
|
Access event data by name, using the '{: name}' notation.
log.with("id", userID).to(User, Postgres, Create)
.info("created a new user with id {: id}");
|
Set new event data under the given name, using the '{@ name}' notation, where the first instance consumes the first argument, storing it as event data as well as writing it into the message.
log.info("created a new user with id {@ id}", userID);
|
Access the event arguments array by position number, where the first argument is '{1}'.
log.info("created a new user with id {2}", userName, userID);
|
¶ Topic-Based Logging
Topics provide an alternative way to organize and filter your logging statements, instead of the usual package hierarchy. In a sense, it decouples your package naming from your logging configuration, allowing you to configure appenders and levels for a set of topics instead of packages.

Here's an example: let's say I'm creating a new user in my database, which is PostgreSQL. When I'm going through the log data sometime later, maybe to troubleshoot a bug, I want to query for when the user with a particular id was created. Searching for class names or message fragments tends to make for noisy, imprecise results. Instead, using topics, I can query for only those events whose topics include, say, Users and Create.

Topics can be conveniently created from Strings and Enums in your application; the to(...) method then assigns them to an event:
log.to(Postgres, Users, Create)
.info("Created a new user with id '{@ id}'.", user.getId());
|
¶ Methods |
|
Sets the topics for the log event, which can be anything you like. Strings and Enums stored within your application are the most likely sources of topics.
.to(Topic...topics)
|
Access the event topics by position number, using the '{~ n}' notation, where the first topic is '{~ 1}'.
log.to(User, Postgres, Create)
.info("new user with id {@ id} stored to {~ 2}", userID);
|
Subscribe a component, such as an appender, to one or more topics through the TopicBroker.
TopicBroker.subscribe(component, TopicOne, TopicTwo);
|
The broker can also be used to set the threshold level for one or more topics.
TopicBroker.setLevel(Level.INFO, TopicOne, TopicTwo);
|
You can also skip traditional logging by creating a TopicLogMachine, which automatically applies its topics to every event it logs.
TopicLogMachine dbLog = new TopicLogMachine(Postgres);
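Every event logged through that instance then carries the topic without further decoration; a brief sketch (the message and argument are illustrative):

dbLog.info("connection established after {} ms", elapsedMillis);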
|
¶ Implementations
Several modules are available to connect the Log Machine API to an underlying logging implementation. (And adding new connectors for your favorite library is easy!)
¶ Logstash / ElasticSearch / Kibana
One great way to interact with your log data is to index it with a search index, like ElasticSearch. A common workflow might be: the application flattens each log event into a string and writes it to a file; Logstash tails the file and parses each line back into structured data; and the results are indexed into ElasticSearch and explored with Kibana.

This seems a bit roundabout. We just spent all that time decorating our log event with lots of good data, just to have it turned into a string, and then (to add insult to injury) parsed back in a haphazard, coupled way, before finally being indexed? Instead, LogMachine proposes this alternative: write the structured event directly from the application to ElasticSearch, in the Logstash JSON format, with no string round trip.

Most applications can do this, as the volume to be indexed won't overwhelm the ES cluster. However, in scaled-out architectures you could still log to a file: just use the formatter which prints JSON in the proper format, and then read it back in later. Or, the SQS component (below) can be used to store messages in a queue.

¶ Components
¶ Amazon SQS
The SQS component provides an appender which writes log events to an Amazon SQS queue. SQS is cheap, offers high throughput, and can be useful when you need a buffer between your application and your log visualizer. In combination with the ElasticSearch components, it is possible to write log events to an SQS queue in Logstash JSON format and then later read and index them into your search cluster.

¶ Components