
Checklist

...

  •  User stories documented (Shankar)
  •  User stories reviewed (Nitin)
  •  Design documented (Shankar/Vinisha)
  •  Design reviewed (Terence/Andreas)
  •  Feature merged ()
  •  Examples and guides ()
  •  Integration tests () 
  •  Documentation for feature ()
  •  Blog post

Use Case

  • User wants to group log messages at the application level and write multiple separate log files for each application. Example:

...

  • application-dir/{audit.log, metrics.log, debug.log}

...

  • User wants to write these log files to a configurable path in HDFS.
  • User also wants to be able to configure a rolling policy for these log files, similar to log-back.

...


User Stories

  1. For each application, the user wants to collect the application's logs into multiple log files based on log level.

  2. For each application, the user wants to configure a location in HDFS to be used to store the collected logs.
  3. For each application, the user wants the application log files stored in text format.
  4. For each application, the user wants to configure the RotationPolicy of the log files.
  5. For each application, the user wants to configure a different layout for formatting the log messages of each log file generated.
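As a sketch of how such per-application configuration might look (all property names below are hypothetical, not an existing CDAP API):

```properties
# Hypothetical log-saver extension configuration (illustrative only)
log.processor.hdfs.path=/cdap/logs/${application}
log.processor.rotation.policy=size
log.processor.rotation.max.size.mb=256
# log-back-style layout per log file
log.processor.layout.audit=%d{ISO8601} %-5level %msg%n
```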

Design

Introduce LogProcessor, FileWriter, and RotationPolicy interfaces, pluggable in the CDAP log.saver.

...

Code Block
public interface LogProcessor {

  /**
   * Called during initialization; passed the properties for the log processor.
   *
   * @param properties configuration properties for this log processor
   */
  void initialize(Properties properties);

  /**
   * The process method will be called with an iterator of log messages; the log messages received will
   * be in sorted order, sorted by timestamp. This method should not throw any exceptions. If any
   * unchecked exceptions are thrown, log.saver will log an error and the processor will not receive messages.
   * The processor will start receiving messages on log.saver startup.
   * 
   * @param events list of {@link LogEvent}
   */
  void process(Iterator<LogEvent> events);

  /**
   * Stop the log processor.
   */
  void destroy();
}
Code Block
class LogEvent {
  /**
   * Logging event
   **/
  ILoggingEvent iLoggingEvent;
 
  /**
   * CDAP program entity-id
   **/
  EntityId entityId;
}

 

Currently, we only have AvroFileWriter in log.saver; we can create an interface for users to configure the FileWriter if needed. This provides the option to abstract common logic for file rotation, maintaining created files, etc. in log.saver, while a custom file writer implements the other methods specific to its logic.

Example: creating files in HDFS and tracking the size of events processed are handled by a custom FileWriter extension.

 

Code Block
public interface MultiFileWriter {
  /**
   * Get the FileManager for the log event. This file manager will be used to create, append
   * events to, flush, and close the file for the logging events of the entityId (logging-context).
   */
  FileManager getFileManager(LogEvent event);
}
Code Block
interface FileManager {
  /**
   * Based on the logEvent, get the entityId and timestamp and use that information to create the file.
   **/
  File createFile(LogEvent logEvent);

  /**
   * Append log events to the currently active file belonging to the entityId represented by these log events.
   * Logic: on the first append, we determine whether the file has to be rotated using
   * RotationPolicy#shouldRotateFile(File file, LogEvent logEvent). If it has to be rotated, we use
   * RotationPolicy#rotateFile(File file, LogEvent logEvent) to rotate the file (close the old
   * file) and append to the new file.
   **/
  void appendEvents(Iterator<LogEvent> logEvents);

  /**
   * Close the currently active file.
   **/
  void close(File file, long timestamp);

  /**
   * Flush the contents of the currently active file.
   **/
  void flush();
}

 

 

Code Block
public interface RotationPolicy {
  /**
   * For the logEvent, decide if we should rotate the current file corresponding to this event or not.
   */
  boolean shouldRotateFile(File file, LogEvent logEvent);
 
  /**
   * For the logEvent, rotate the log file based on rotation logic and return the newly created File.
   */
  File rotateFile(File file, LogEvent logEvent);
 
  /**
   * For the logEvent, get the currently active file used for appending the log events.
   */ 		
  File getActiveFile(LogEvent logEvent);
}
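For illustration, a minimal sketch of a size-based rotation policy. The RotationPolicy interface is repeated from the design above so the snippet compiles standalone; LogEvent is reduced to a bare timestamp as a stand-in for iLoggingEvent.getTimeStamp(), and the rotated-file naming scheme is an assumption, not part of the proposal.

```java
import java.io.File;

// Sketch only: LogEvent reduced to a timestamp so the example is self-contained.
class LogEvent {
  final long timestamp;
  LogEvent(long timestamp) { this.timestamp = timestamp; }
}

// Repeated from the design above.
interface RotationPolicy {
  boolean shouldRotateFile(File file, LogEvent logEvent);
  File rotateFile(File file, LogEvent logEvent);
  File getActiveFile(LogEvent logEvent);
}

// Rotates the active file once it grows past a configured size.
class SizeBasedRotationPolicy implements RotationPolicy {
  private final long maxSizeBytes;
  private File activeFile;

  SizeBasedRotationPolicy(long maxSizeBytes, File initialFile) {
    this.maxSizeBytes = maxSizeBytes;
    this.activeFile = initialFile;
  }

  @Override
  public boolean shouldRotateFile(File file, LogEvent logEvent) {
    // File#length() returns 0 for a file that does not exist yet.
    return file.length() >= maxSizeBytes;
  }

  @Override
  public File rotateFile(File file, LogEvent logEvent) {
    // Closing the old file is the FileManager's job in the design; here we only
    // pick the new file name, suffixed with the event timestamp so rotated
    // files sort chronologically.
    activeFile = new File(file.getParent(), "audit." + logEvent.timestamp + ".log");
    return activeFile;
  }

  @Override
  public File getActiveFile(LogEvent logEvent) {
    return activeFile;
  }
}
```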

 

Approach

Option-1

Log Processor/File Writer Extensions run in the same container as log.saver. 

...

1) Should the log processor plugins have separate class loaders, or can they share the same ClassLoader as the log.saver system?

     Having isolation helps processor extensions depend on different libraries, but should we allow them?

2) If we use the same class loader as log.saver, dependencies of extensions can be added to the classpath, and the classes available in the log.saver system (hadoop, proto, etc.) can be filtered out from the extension, so that we use the classes provided by log.saver.

3) However, if there are multiple log processor extensions, say one for writing to S3 and another for writing to Splunk, the classes from their dependencies could potentially conflict with each other if we use the system class loader.

4) If we create a separate class loader for each extension to provide class loader isolation, we need to expose the following:

  • cdap-watchdog-api
  • cdap-proto
  • hadoop
  • logback-classic (we need ILoggingEvent)
  • Should we expose more classes?
  • What if the user wants to write to a Kafka server or to third-party storage such as S3 in the log processor logic? Having a separate class loader will help in these scenarios.
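To make the isolation idea in 4) concrete, a sketch of per-extension class loading: each extension is loaded by its own URLClassLoader whose parent delegates only the exposed API packages. The package prefixes below are illustrative assumptions, not the actual CDAP package list.

```java
// Sketch only: a parent class loader that exposes just the API packages
// (cdap-watchdog-api, cdap-proto, hadoop, logback-classic) to extensions.
class FilteringParentClassLoader extends ClassLoader {
  private static final String[] EXPOSED_PREFIXES = {
      "co.cask.cdap.",        // cdap-watchdog-api / cdap-proto (assumed prefix)
      "org.apache.hadoop.",   // hadoop
      "ch.qos.logback."       // logback-classic (ILoggingEvent)
  };

  FilteringParentClassLoader(ClassLoader delegate) {
    super(delegate);
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    // JDK classes must always come from the parent.
    if (name.startsWith("java.") || name.startsWith("javax.")) {
      return super.loadClass(name, resolve);
    }
    for (String prefix : EXPOSED_PREFIXES) {
      if (name.startsWith(prefix)) {
        return super.loadClass(name, resolve);  // exposed API class: delegate
      }
    }
    // Anything else is invisible to the extension, forcing it to resolve the
    // class from its own jars via the child URLClassLoader.
    throw new ClassNotFoundException(name);
  }
}

// Each extension would then be loaded roughly as:
//   new java.net.URLClassLoader(extensionJarUrls, new FilteringParentClassLoader(systemClassLoader));
```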

 


Sample Custom Log Plugin Implementation 

1) The Log Processor would want to process the ILoggingEvent, format it to a log message string (maybe using log-back layout classes), and write it to a destination.
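A minimal sketch of such a processor, with the LogProcessor interface repeated from the design; a plain string pattern and an in-memory sink stand in for a real log-back Layout and an HDFS/S3 destination, and Iterator<String> stands in for Iterator<LogEvent> so the snippet is self-contained.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;

// Sketch only: the design's LogProcessor, with Iterator<String> standing in
// for Iterator<LogEvent>.
interface LogProcessor {
  void initialize(Properties properties);
  void process(Iterator<String> events);
  void destroy();
}

// Formats each message with a configurable pattern (a stand-in for a log-back
// Layout) and appends it to an in-memory sink (a stand-in for HDFS/S3).
class PatternLogProcessor implements LogProcessor {
  private String pattern;
  final List<String> sink = new ArrayList<>();

  @Override
  public void initialize(Properties properties) {
    // Pattern supplied through the processor's properties; "%msg" is replaced
    // with the raw message, loosely mimicking log-back's pattern syntax.
    pattern = properties.getProperty("log.pattern", "%msg");
  }

  @Override
  public void process(Iterator<String> events) {
    while (events.hasNext()) {
      sink.add(pattern.replace("%msg", events.next()));
    }
  }

  @Override
  public void destroy() {
    sink.clear();
  }
}
```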

...