
Checklist

  •  User Stories Documented
  •  User Stories Reviewed
  •  Design Reviewed
  •  APIs reviewed
  •  Release priorities assigned
  •  Test cases reviewed
  •  Blog post

Introduction 

Phase 1 of replication is to support a hot-cold setup where CDAP data is replicated from one cluster to another using existing tools for replicating underlying infrastructure.

Goals

Allow manual failover from a hot cluster to a cold cluster.

User Stories 

  • As a cluster administrator, I want to be able to configure CDAP so that all HBase tables created by CDAP are set up to replicate data to another cluster
  • As a cluster administrator, I want to be able to manually stop CDAP in one cluster and start it in another cluster with the exact same state
  • As a cluster administrator, I want a way to know when it is safe to start the cold cluster after the hot one has been shut down

Design

CDAP stores state in several systems:

HDFS

  • Transaction snapshots
  • Artifacts (jars)
  • Streams
  • FileSet based datasets
  • Program logs

HBase

  • CDAP entity metadata (program specifications, schedules, run history, metrics, etc.)
  • Table based datasets
  • Kafka offsets for metrics and logs
  • Flow queues
  • Messaging system data

Kafka

  • Unprocessed metrics
  • Unsaved log messages

Hive

  • Explorable CDAP datasets and their partitions

For phase 1, much of the responsibility for data replication falls to the cluster administrator. It is assumed that replication of HDFS, Hive, and Kafka will be handled by the cluster administrator. HDFS replication is usually done through regularly scheduled distcp jobs, or by using distro-specific tools such as Cloudera's Backup and Data Recovery (http://www.cloudera.com/documentation/enterprise/latest/topics/cm_bdr_about.html). Kafka replication can be done using MirrorMaker. Hive replication can be done by replicating the data (HDFS and/or HBase), and by replicating the metastore through whatever replication mechanisms are available to the relational database backing the metastore. All of this can be set up outside of CDAP.

One thing CDAP needs to ensure is that there are no cluster-specific values in any of the metadata. For example, the namenode address should not appear in any of the system metadata; otherwise, things will fail when the data is replicated over to the slave cluster and the slave is started.
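
For illustration only, a minimal sketch of the difference (the paths here are hypothetical):

Code Block
// Hypothetical illustration: metadata must not embed cluster-specific
// values such as the namenode address.
public class PortablePathExample {
  public static void main(String[] args) {
    // Breaks after failover: the cold cluster has a different namenode.
    String clusterSpecific = "hdfs://namenode.hot.example.com:8020/cdap/artifacts/app-1.0.jar";
    // Portable: resolved against the active cluster's fs.defaultFS at runtime.
    String portable = "/cdap/artifacts/app-1.0.jar";
    System.out.println(clusterSpecific + " vs " + portable);
  }
}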

HBase DDL Design

HBase DDL, however, will require some hooks in CDAP, because replication must be set up for every table when it is created, and before any data is written to it. CDAP will define an interface to create, modify, and delete HBase tables. By default, it will be implemented by the current code, which only creates tables in the local HBase instance. Another implementation can be used by setting a property in cdap-site.xml that specifies the class to use. The jar containing the class must be included in the CDAP classpath. This custom class could, for example, make an HTTP call to an external service to create the needed HBase tables.

Code Block
Interface to be put here
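
Until the actual interface is filled in above, here is a rough sketch of what such an SPI could look like (all names and signatures are hypothetical, not the final API; see HBase DDL SPI for the real design):

Code Block
import java.io.IOException;

// Hypothetical sketch only; method names and signatures are illustrative.
public interface HBaseDDLExecutor {

  // Create the table if it does not exist; a replication-aware
  // implementation would enable replication before any data is written.
  void createTableIfNotExists(String namespace, String name, byte[][] splitKeys) throws IOException;

  // Modify an existing table, for example to add a column family.
  void modifyTable(String namespace, String name) throws IOException;

  // Delete the table if it exists.
  void deleteTableIfExists(String namespace, String name) throws IOException;
}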

Design details are at HBase DDL SPI.

Replication Status

Cluster administrators will require a way to tell when it is safe for a cold cluster to be started up. In other words, they need to be able to tell when all necessary data has been replicated. HBase shell already includes a command that helps:

Code Block
hbase(main):030:0> status 'replication', 'source'
version 1.1.2.2.3.4.7-4
1 live servers
    [hostname]:
       SOURCE: PeerID=1, AgeOfLastShippedOp=29312, SizeOfLogQueue=0, TimeStampsOfLastShippedOp=Thu Nov 10 22:51:55 UTC 2016, Replication Lag=29312

HBase also includes a MapReduce job that can be used to verify replicated data (https://hbase.apache.org/book.html#_verifying_replicated_data). It must be run on the master cluster.

Code Block
$ HADOOP_CLASSPATH=`hbase classpath` hadoop jar /usr/hdp/current/hbase-master/lib/hbase-server-1.1.2.2.3.4.7-4.jar verifyrep <peer id> <table>
...
	Map-Reduce Framework
		Map input records=1
		Map output records=0
		Input split bytes=103
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=64
		CPU time spent (ms)=1810
		Physical memory (bytes) snapshot=255139840
		Virtual memory (bytes) snapshot=916021248
		Total committed heap usage (bytes)=287309824
	org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication$Verifier$Counters
		BADROWS=1
		CONTENT_DIFFERENT_ROWS=1

Under the VerifyReplication counters, you want to see only the GOODROWS counter; BADROWS or CONTENT_DIFFERENT_ROWS indicate rows that have not been correctly replicated.

Design details are at Replication Status Tool.

Kafka offset mismatches

MirrorMaker is not much more than a Kafka client that consumes from source topics and writes the same messages to some destination. As such, it does not offer any guarantees about a message from the source being written to the same partition and offset in the destination. The log saver and metrics processor store Kafka offsets per topic partition in HBase, and their corresponding fetch endpoints will need to be modified to handle the fact that Kafka offsets can be different in the hot and cold clusters.
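
As an illustration of one possible technique (not necessarily the chosen design), a fetch endpoint could resolve a stored position against the cold cluster by message timestamp instead of raw offset, using the offsetsForTimes call available in Kafka 0.10.1+ consumers. This sketch assumes message timestamps survive mirroring:

Code Block
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public final class OffsetResolver {
  private OffsetResolver() { }

  // Seek the consumer (already assigned the partition via assign()) to the
  // first message whose timestamp is at or after the last timestamp
  // processed on the hot cluster.
  public static void seekByTimestamp(KafkaConsumer<byte[], byte[]> consumer,
                                     TopicPartition partition,
                                     long lastProcessedTimestamp) {
    Map<TopicPartition, OffsetAndTimestamp> resolved =
        consumer.offsetsForTimes(Collections.singletonMap(partition, lastProcessedTimestamp));
    OffsetAndTimestamp target = resolved.get(partition);
    if (target != null) {
      consumer.seek(partition, target.offset());
    } else {
      // No message at or after that timestamp exists yet; start from the end.
      consumer.seekToEnd(Collections.singletonList(partition));
    }
  }
}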

Approach

The offsets stored in HBase will need to be re-resolved against the cold cluster's Kafka after a cluster failover. Design details are at Resolving Kafka Offset Mismatches.

API changes

New Programmatic APIs

See HBase DDL SPI.

Deprecated Programmatic APIs

No programmatic APIs will be deprecated.

New REST APIs

No new REST APIs will be added to the platform. There may be new REST APIs used by an external service that handles table DDL and replication.

Deprecated REST API

No REST APIs will be deprecated.

CLI Impact or Changes

  • No Changes

UI Impact or Changes

  • Impact #1
  • Impact #2
  • Impact #3

Security Impact

Cluster administrators are responsible for setting up replication, but we should understand what is required from a security perspective to replicate HDFS and HBase data.

Impact on Infrastructure Outages

With replication, there is now another cluster that is required. If the cold cluster suffers an outage, replication will eventually catch up once service is restored. This assumes the outage lasts for a shorter duration than the time HBase keeps events in its WAL.

Test Scenarios

Test ID | Test Description | Expected Results

Releases

Release 4.1.0

Phase 1 work is scheduled for release 4.1.0.

Related Work

  • Work #1
  • Work #2
  • Work #3

     

Future Work