
WIP

Use Case 

Users would like to apply custom logic for filtering log messages at the application level and write the filtered log messages to log files in HDFS.

Design 

Introduce pluggable LogProcessor and FileWriter interfaces in the CDAP Log Saver.

API Design

Step 1: LogProcessor

 

import java.util.Iterator;
import java.util.Properties;

public interface LogProcessor {

  /**
   * Called during initialization, with the configuration properties for the log processor.
   *
   * @param properties configuration properties for the log processor
   */
  void initialize(Properties properties);

  /**
   * Called with an iterator of log messages, sorted by timestamp. This method should not
   * throw any exceptions; if an unchecked exception is thrown, log.saver will log an error
   * and the processor will stop receiving messages. The processor starts receiving messages
   * on log.saver startup.
   *
   * @param events iterator of {@link LogEvent}
   */
  void process(Iterator<LogEvent> events);

  /**
   * Stops the log processor.
   */
  void destroy();
}
class LogEvent {
  /**
   * Logging event
   **/
  ILoggingEvent iLoggingEvent;
 
  /**
   * CDAP program entity-id
   **/
  EntityId entityId;
}
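
For illustration, here is a minimal sketch of a processor built against the interface above that keeps only events at or above a configurable level. The class name, the property name, and the forwarding destination are assumptions for this sketch, not part of the design.

import java.util.Iterator;
import java.util.Properties;

import ch.qos.logback.classic.Level;

/**
 * Hypothetical example: keeps only events at or above a configurable level.
 */
public class LevelFilterLogProcessor implements LogProcessor {

  private Level threshold;

  @Override
  public void initialize(Properties properties) {
    // "log.processor.level" is an assumed property name for this sketch
    threshold = Level.toLevel(properties.getProperty("log.processor.level", "WARN"));
  }

  @Override
  public void process(Iterator<LogEvent> events) {
    while (events.hasNext()) {
      LogEvent event = events.next();
      if (event.iLoggingEvent.getLevel().isGreaterOrEqual(threshold)) {
        // forward the matching event to the custom destination (omitted in this sketch)
      }
    }
  }

  @Override
  public void destroy() {
    // release any resources held by the destination
  }
}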

 

Step 2: FileWriter

Currently we only have an AvroFileWriter in log.saver. We can create a FileWriter interface that users can implement and configure if needed. This gives us the option to abstract common logic (file rotation, getting previous files, etc.) into an AbstractFileWriter, while a custom file writer implements only the methods specific to its own logic, for example writing to HDFS as text files. The expected call pattern is sketched after the code below.

 

import java.io.File;
import java.util.Iterator;

import ch.qos.logback.classic.spi.ILoggingEvent;

public interface FileWriter {

  /**
   * Appends events to the file.
   */
  void append(Iterator<ILoggingEvent> events);

  /**
   * Rotates the old file corresponding to the entityId and timestamp and gets a new file.
   */
  File rotateFile(File file, EntityId entityId, long timestamp);

  /**
   * If a file already exists for this entity and timestamp base, that file is returned;
   * otherwise a new file is created.
   */
  File getFile(EntityId entityId, long timestamp);

  /**
   * Closes the file.
   */
  void close(File file, long timestamp);

  /**
   * Closes and deletes the file.
   */
  void closeAndDelete(File file);

  /**
   * Flushes the contents.
   */
  void flush();
}

public abstract class AbstractFileWriter implements FileWriter {

  public File rotateFile(File file, EntityId entityId, long timestamp) {
    // common logic for rotating files
  }

  public File getFile(EntityId entityId, long timestamp) {
    // common logic for getting previously created files
  }
  // etc.
}
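
The expected call pattern is roughly the following; this is a sketch, not the actual log.saver code, and the surrounding class and method names are assumptions.

import java.io.File;
import java.util.Iterator;

import ch.qos.logback.classic.spi.ILoggingEvent;

// Rough sketch of how the FileWriter methods would likely be driven for one
// batch of events that all belong to a single entity.
public class FileWriterUsageSketch {

  void writeBatch(FileWriter writer, EntityId entityId, long timestamp,
                  Iterator<ILoggingEvent> events) {
    // reuse an existing file for this entity and timestamp base, or create a new one
    File file = writer.getFile(entityId, timestamp);

    // append the batch and make it durable
    writer.append(events);
    writer.flush();

    // rotation is decided by the implementation's own size/time policy, e.g.:
    //   file = writer.rotateFile(file, entityId, timestamp);
    // close(file, timestamp) is called when the file is no longer needed,
    // e.g. on log.saver shutdown.
  }
}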

 

Option-1

Log processor / file writer extensions run in the same container as log.saver.

Lifecycle

1) Log saver will load and initialize the log processor plugin.

2) As log.saver processes messages, the log processor's process method will also be called with the logging events.

3) If the log processor extension throws an error, we can either:

  • stop the plugin, (or)
  • log an error, continue, and stop the plugin only after an error threshold is reached (see the sketch after this list).

4) Stop the log processor when log.saver stops.
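
A minimal sketch of how log.saver could wrap a plugin to implement this lifecycle, including the error-threshold behavior; the class and method names are assumptions.

import java.util.Iterator;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical wrapper: drives the plugin lifecycle and stops forwarding
// events once a configurable error threshold is reached.
public class LogProcessorRunner {

  private static final Logger LOG = LoggerFactory.getLogger(LogProcessorRunner.class);

  private final LogProcessor processor;
  private final int errorThreshold;
  private int errorCount;
  private boolean stopped;

  public LogProcessorRunner(LogProcessor processor, Properties properties, int errorThreshold) {
    this.processor = processor;
    this.errorThreshold = errorThreshold;
    processor.initialize(properties);            // 1) load and initialize the plugin
  }

  public void onMessages(Iterator<LogEvent> events) {
    if (stopped) {
      return;                                    // a stopped plugin receives no more messages
    }
    try {
      processor.process(events);                 // 2) forward the sorted logging events
    } catch (Throwable t) {
      LOG.error("Log processor plugin failed", t);
      if (++errorCount >= errorThreshold) {      // 3) stop after repeated errors
        stop();
      }
    }
  }

  public void stop() {                           // 4) stop the plugin when log.saver stops
    if (!stopped) {
      stopped = true;
      processor.destroy();
    }
  }
}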

 

Class-Loading Isolation

1) Should the log processor plugins have separate class loaders, or can they share the same ClassLoader as the log.saver system?

     Having isolation allows processor extensions to depend on different libraries, but should we allow that?

2) If we create a separate class loader, we need to expose the following (a loading sketch follows this list):

  • cdap-watchdog-api
  • cdap-proto
  • hadoop
  • logback-classic (we need ILoggingEvent)
  • Should we expose more classes?
  • What if a user wants to write to a Kafka server or third-party storage such as S3 in the log processor logic? Having a separate class loader will help in these scenarios.
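
A minimal sketch of loading an extension from its own jar with a separate URLClassLoader, assuming the shared APIs above stay visible from the parent class loader; class and method names are assumptions, and a production implementation would likely need a filtering (child-first) class loader rather than this plain parent-first one.

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Properties;

// Hypothetical sketch: load a processor extension from its own jar so it can
// bundle its own dependencies (e.g. a Kafka or S3 client).
public class LogProcessorLoader {

  public static LogProcessor load(File pluginJar, String className, Properties properties)
      throws Exception {
    ClassLoader parent = LogProcessorLoader.class.getClassLoader();
    URLClassLoader pluginClassLoader =
        new URLClassLoader(new URL[] { pluginJar.toURI().toURL() }, parent);
    LogProcessor processor = (LogProcessor) pluginClassLoader
        .loadClass(className)
        .getDeclaredConstructor()
        .newInstance();
    processor.initialize(properties);
    return processor;
  }
}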

 

Sample Custom Log Plugin Implementation 

1) The log processor would want to process the ILoggingEvent, format it into a log message string (maybe using a logback layout), and write it to a destination.

2) However, the configuration for this log processor cannot be a logback.xml:

  • there can only be one logback.xml in a JVM, and logback is already configured for the log.saver container.
  • logback doesn't have an existing implementation for writing to HDFS.

3) The configuration for the logging location (base directory in HDFS) and the logging class to use (SizeBasedRolling, etc.) could be provided through cdap-site.xml for the extensions. These properties would be passed to the extension during initialize.

4) The log processor extension could provide an implementation of the FileWriter interface (or an extension of AbstractFileWriter) with the HDFS file writer logic for the events it has processed using the LogProcessor.

5) Future implementations of other policies have to be implemented on the extensions' side and configured through cdap-site.xml. (A sketch of such an extension follows this list.)
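
A minimal sketch of such an extension: it formats events with a programmatically configured logback PatternLayout (so no logback.xml is needed) and appends the formatted lines to a text file in HDFS. For brevity it uses Hadoop's FileSystem API directly with a single hard-coded file; a real extension would go through a FileWriter/AbstractFileWriter implementation. The property names and the class name are assumptions.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.Properties;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.PatternLayout;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.slf4j.LoggerFactory;

// Hypothetical extension: logback PatternLayout for formatting, a text file in
// HDFS as the destination.
public class HdfsTextLogProcessor implements LogProcessor {

  private PatternLayout layout;
  private FSDataOutputStream out;

  @Override
  public void initialize(Properties properties) {
    layout = new PatternLayout();
    layout.setContext((LoggerContext) LoggerFactory.getILoggerFactory());
    // pattern and base directory are assumed property names, provided via cdap-site.xml
    layout.setPattern(properties.getProperty("log.processor.pattern",
                                             "%d{ISO8601} %-5level %logger{36} - %msg%n"));
    layout.start();
    try {
      FileSystem fs = FileSystem.get(new Configuration());
      out = fs.create(new Path(properties.getProperty("log.processor.hdfs.dir"), "program.log"));
    } catch (IOException e) {
      throw new RuntimeException("Failed to create log file in HDFS", e);
    }
  }

  @Override
  public void process(Iterator<LogEvent> events) {
    try {
      while (events.hasNext()) {
        LogEvent event = events.next();
        // the entityId could be used to pick a per-program file; one file is used here
        out.write(layout.doLayout(event.iLoggingEvent).getBytes(StandardCharsets.UTF_8));
      }
      out.hflush();
    } catch (IOException e) {
      throw new RuntimeException("Failed to write log events to HDFS", e);
    }
  }

  @Override
  public void destroy() {
    layout.stop();
    try {
      out.close();
    } catch (IOException e) {
      // ignore on shutdown
    }
  }
}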

 

Pros

1) Leverages the scalability of log.saver.

2) Utilizes existing resources, logic, and processing; plugins also benefit from log.saver's sorted-message delivery.

3) Makes log.saver extensible, with the option to store logs in a different format and apply custom filtering logic.

 

Cons

1) As the number of extensions increases, or if a processor extension is slow, the throughput of log.saver could drop, affecting overall CDAP log saving.

 

Option-2 (or an improvement on Option-1)

 

Configure and run a separate container for every log processor plugin.

Log.saver could have the capability to launch system and user plugin containers. The scalability of these plugin containers could be managed separately.

 

 

 

 
