Goals

  1. Allow CDAP users to securely store sensitive data.
  2. Allow authorized CDAP users to access stored data at runtime.
  3. Allow authorized CDAP users to manage the stored data.

Checklist

  •  User stories documented (Nishith)
  •  User stories reviewed (Nitin)
  •  Design documented (Nishith)
  •  Design reviewed (Andreas/Terence)
  •  Feature merged (Nishith)
  •  Blog post 

User Stories

  1. As a CDAP/Hydrator security admin, I want sensitive information, such as passwords, to not be stored in plaintext.

 

Scenarios

 

Brief introduction to Hadoop KMS

Hadoop KMS is a cryptographic key management server based on Hadoop’s KeyProvider API.

...

The KMS is a proxy that interfaces with a backing key store on behalf of HDFS daemons and clients. Both the backing key store and the KMS implement the Hadoop KeyProvider API. A default Java keystore implementation is provided for testing, but it is not recommended for production use. Cloudera provides Navigator Key Trustee for production clusters; Hortonworks recommends using Ranger KMS.
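For reference, the sketch below is a minimal illustration of how a client obtains a KeyProvider backed by KMS and lists the stored keys; the KMS host and port are assumptions, not part of this design.

// Minimal sketch: obtaining a KMS-backed KeyProvider. The KMS URL (kms-host:16000)
// is an assumption for illustration; substitute the actual cluster endpoint.
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KmsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // HDFS daemons and clients resolve the key provider the same way, via this property.
    conf.set(KeyProviderFactory.KEY_PROVIDER_PATH, "kms://http@kms-host:16000/kms");

    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    KeyProvider kms = providers.get(0);

    // List the key names known to the backing key store (through the KMS proxy).
    for (String name : kms.getKeys()) {
      System.out.println(name + " : " + kms.getMetadata(name).getDescription());
    }
  }
}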


[Figure: Hadoop KMS architecture. Image taken from the Cloudera Engineering Blog.]


Design

 

The entity stored will be composed of three parts:

  1. AliasName: The identifier, provided by the user, that will be used to retrieve the object.
  2. Properties: A key-value map containing the properties of the object being stored.
  3. DataValue: The data being stored, passed in as a byte array.

...

  1. string.

Design decisions

  1. Hadoop KMS supports versioning for the keys it stores. This is used mainly for key rollovers. In this release, we won't support versioning.

 

The following operations will be supported by the store:

  • Store
  • Get data
  • Get metadata
  • List
  • Delete

 

The system will expose these APIs to clients:

Code Block (java): Secure Store Programmatic API
// Represents the metadata about the data
interface SecureStoreMetaData {
  String getName();
  String getDescription();
  long getLastModifiedTime();
  Map<String, String> getProperties();
}
 
// Represents the secure data
interface SecureStoreData {
  // Returns the metadata about the secure data
  SecureStoreMetaData getMetaData();
 
  // Returns the secure data
  byte[] get();
}
 
// Provides read-only access to secure store
interface SecureStore {
  // Returns a map with names as keys and descriptions as values for the available
  // secure data in the secure store.
  Map<String, String> list(String namespace);
 
  // Gets the secure data
  SecureStoreData get(String namespace, String name);
}
 
// Manager interface for managing secure data
interface SecureStoreManager {
  // Stores the secure data
  void put(String namespace, String name, byte[] data, Map<String, String> properties);
 
  // Removes the secure data
  void delete(String namespace, String name);
}
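As an illustration of how a program might use these interfaces, the sketch below stores and retrieves a password. The namespace, key name, and property names are hypothetical, and it is assumed the interfaces are provided to the program (e.g., through injection); this is not the final API usage pattern.

// Hypothetical usage sketch of the programmatic API defined above.
import java.nio.charset.StandardCharsets;
import java.util.Collections;

public class SecureStoreUsageSketch {
  void storeAndRead(SecureStoreManager manager, SecureStore store) {
    // Store a database password under the "default" namespace (names are illustrative).
    manager.put("default", "db.password",
                "s3cr3t".getBytes(StandardCharsets.UTF_8),
                Collections.singletonMap("description", "Password for the reporting DB"));

    // Later, an authorized program reads the data back at runtime.
    SecureStoreData data = store.get("default", "db.password");
    String password = new String(data.get(), StandardCharsets.UTF_8);
    System.out.println(data.getMetaData().getName() + " retrieved, length " + password.length());
  }
}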

 

REST API

Operation: Put
REST API: PUT /v3/namespaces/<namespace>/securekeys/<key-name>
Body (Content-Type: application/json):

Code Block: Put Data
{
  "name"        : "<name>",
  "description" : "<description>",
  "data"        : "<data>",          // base64
  "properties"  : {
    "key" : "value",
    ...
  }
}

Response: 200 OK

Operation: Delete
REST API: DELETE /v3/namespaces/<namespace>/securekeys/<key-name>
Body: N/A
Response: 200 OK, 404 Not Found

Operation: Get
REST API: GET /v3/namespaces/<namespace>/securekeys/<key-name>
Body: N/A
Response: 200 OK (Content-Type: application/json), 404 Not Found

Code Block
{
  "name" : "<name>",
  "data" : "<data>"                  // base64
}

Operation: Get Metadata
REST API: GET /v3/namespaces/<namespace>/securekeys/<key-name>/metadata
Body: N/A
Response: 200 OK (Content-Type: application/json), 404 Not Found

Code Block
{
  "name"        : "<name>",
  "description" : "<description>",
  "created"     : <millis-epoch>,    // long
  "properties"  : {
    "key" : "value",
    ...
  }
}

Operation: List
REST API: GET /v3/namespaces/<namespace>/securekeys
Body: N/A
Response: 200 OK (Content-Type: application/json)

Code Block
[
  {
    "name"        : "<name>",
    "description" : "<description>"
  },
  {
    "name"        : "<name>",
    "description" : "<description>"
  },
  ...
]
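As a usage illustration, the sketch below issues the Put call with plain HttpURLConnection. The router address and port (localhost:10000), the key name, and the payload values are assumptions for illustration only.

// Hypothetical REST client sketch for the Put endpoint.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SecureKeyRestSketch {
  public static void main(String[] args) throws Exception {
    // The secret is base64-encoded before being placed in the JSON body.
    String data = Base64.getEncoder()
        .encodeToString("s3cr3t".getBytes(StandardCharsets.UTF_8));
    String body = "{"
        + "\"name\":\"db.password\","
        + "\"description\":\"Password for the reporting DB\","
        + "\"data\":\"" + data + "\","
        + "\"properties\":{\"owner\":\"ops\"}"
        + "}";

    // localhost:10000 is an assumed router address; substitute the actual one.
    URL url = new URL("http://localhost:10000/v3/namespaces/default/securekeys/db.password");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("Response code: " + conn.getResponseCode());
  }
}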

Access Control

The secure store can be protected with a key in the CDAP master keystore, which CDAP already requires the user to provide in order to have SSL enabled. Since the program will be executed in the same JVM as the SDK process, access to the sensitive data can be done directly through the proper Guice binding that binds the SecureStore interface to the actual implementation.
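A minimal sketch of what such a binding could look like is shown below; FileSecureStore is a hypothetical implementation name used only for illustration, and the actual module structure may differ.

// Hypothetical Guice module for the SDK (standalone) case.
import com.google.inject.AbstractModule;
import com.google.inject.Scopes;

public class SecureStoreModule extends AbstractModule {
  @Override
  protected void configure() {
    // FileSecureStore is an illustrative placeholder for a standalone implementation
    // that implements both SecureStore and SecureStoreManager.
    bind(SecureStore.class).to(FileSecureStore.class).in(Scopes.SINGLETON);
    bind(SecureStoreManager.class).to(FileSecureStore.class).in(Scopes.SINGLETON);
  }
}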

KMS uses Hadoop Authentication for HTTP authentication. Hadoop Authentication issues a signed HTTP Cookie once the client has authenticated successfully.

Caching

Hadoop KMS caches keys for a short period of time to avoid excessive hits to the underlying key provider. Of the operations we are interested in, only two use the cache: get data and get metadata.
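For reference, the cache is configured in kms-site.xml on the KMS host. The property names below are from the Hadoop KMS documentation linked in the References; the values shown are only examples, not recommendations for this design.

<!-- kms-site.xml: KMS cache settings (example values) -->
<property>
  <name>hadoop.kms.cache.enable</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.kms.cache.timeout.ms</name>
  <value>600000</value>
</property>
<property>
  <name>hadoop.kms.current.key.cache.timeout.ms</name>
  <value>30000</value>
</property>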

Audit logs

All access to the secure store will be logged. 

 


Audit logs are aggregated by KMS for API accesses to the GET_KEY_VERSION, GET_CURRENT_KEY, DECRYPT_EEK, GENERATE_EEK operations.

Entries are grouped by the (user,key,operation) combined key for a configurable aggregation interval after which the number of accesses to the specified end-point by the user for a given key is flushed to the audit log.

Implementation

The following two implementations will be provided:

Standalone mode

An implementation using standard Java tools (JKS or JCEKS) will be provided. The secure store will be kept in an encrypted file on the local filesystem.
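A minimal sketch of this idea is shown below, assuming a single password-protected JCEKS file on the local filesystem; the file name, alias scheme, and password handling are illustrative only, not the final design.

// Hypothetical standalone-mode sketch: secrets kept as SecretKeyEntry objects in a JCEKS file.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;
import javax.crypto.spec.SecretKeySpec;

public class LocalSecureStoreSketch {
  private static final char[] KEYSTORE_PASSWORD = "changeit".toCharArray(); // illustrative only

  public static void main(String[] args) throws Exception {
    Path file = Paths.get("secure-store.jceks");
    KeyStore ks = KeyStore.getInstance("JCEKS");
    if (Files.exists(file)) {
      try (FileInputStream in = new FileInputStream(file.toFile())) {
        ks.load(in, KEYSTORE_PASSWORD);
      }
    } else {
      ks.load(null, KEYSTORE_PASSWORD); // create an empty keystore
    }

    // Store a secret as a SecretKeyEntry; the "<namespace>:<name>" alias scheme is an assumption.
    byte[] secret = "my-db-password".getBytes(StandardCharsets.UTF_8);
    KeyStore.SecretKeyEntry entry = new KeyStore.SecretKeyEntry(new SecretKeySpec(secret, "AES"));
    ks.setEntry("default:db.password", entry, new KeyStore.PasswordProtection(KEYSTORE_PASSWORD));

    try (FileOutputStream out = new FileOutputStream(file.toFile())) {
      ks.store(out, KEYSTORE_PASSWORD);
    }

    // Read the secret back.
    KeyStore.SecretKeyEntry read = (KeyStore.SecretKeyEntry) ks.getEntry(
        "default:db.password", new KeyStore.PasswordProtection(KEYSTORE_PASSWORD));
    System.out.println(new String(read.getSecretKey().getEncoded(), StandardCharsets.UTF_8));
  }
}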

Distributed mode

The cluster has KMS running

If the cluster has KMS running, we will utilize it for securely storing sensitive information. To do that, we will implement the Hadoop KeyProvider API and forward user calls to KMS. The KeyProvider API, with the methods that need to be implemented, is linked in the References section.
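The sketch below illustrates how put and delete calls could be forwarded to KMS through the KeyProvider API. It is an illustration only: the class name, the namespace-prefixing scheme, and the mapping of properties to key attributes are assumptions, not the final design, and KMS may restrict the characters allowed in key names.

// Hypothetical KMS-backed implementation of the SecureStoreManager interface defined above.
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KmsSecureStoreManager implements SecureStoreManager {
  private final KeyProvider provider;
  private final Configuration conf;

  public KmsSecureStoreManager(Configuration conf) throws IOException {
    this.conf = conf;
    // Assumes the cluster configuration points hadoop.security.key.provider.path at KMS.
    this.provider = KeyProviderFactory.getProviders(conf).get(0);
  }

  @Override
  public void put(String namespace, String name, byte[] data, Map<String, String> properties) {
    try {
      KeyProvider.Options options = new KeyProvider.Options(conf);
      options.setDescription(properties.get("description"));
      options.setAttributes(properties);
      options.setBitLength(data.length * 8); // material length must match the bit length
      // Namespacing is emulated here by prefixing the key name (an assumption).
      provider.createKey(namespace + ":" + name, data, options);
      provider.flush();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void delete(String namespace, String name) {
    try {
      provider.deleteKey(namespace + ":" + name);
      provider.flush();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}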

The cluster does not have KMS running

This mode will not be supported in this release.

Design Decisions:

  1. We will also need to modify the Input class to take namespaced datasets/streams. This can be achieved in different ways, which are listed below.

 

 

Out-of-scope User Stories (4.0 and beyond)

  1. Support for secure store in distributed mode when KMS is not present.

References

Secure Store

https://hadoop.apache.org/docs/stable/hadoop-kms/index.html

https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html

https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/crypto/key/KeyProvider.html