Goals
- Allow CDAP users to securely store sensitive data.
- Allow authorized CDAP users to access stored data at runtime.
- Allow authorized CDAP users to manage the stored data.
Checklist
- User stories documented (Nishith)
- User stories reviewed (Nitin)
- Design documented (Nishith)
- Design reviewed (Andreas/Terence)
- Feature merged (Nishith)
- Blog post
User Stories
- As a CDAP/Hydrator security admin, I want sensitive information such as passwords to never be stored in plaintext.
Scenarios
Brief introduction to Hadoop KMS
Hadoop KMS is a cryptographic key management server based on Hadoop’s KeyProvider API.
It provides client and server components that communicate over HTTP using a REST API.
The client is a KeyProvider implementation that interacts with the KMS over this REST API.
The KMS is a proxy that interfaces with a backing key store on behalf of HDFS daemons and clients. Both the backing key store and the KMS implement the Hadoop KeyProvider API. A default Java keystore-based provider is included for testing but is not recommended for production use. Cloudera provides Navigator Key Trustee for production clusters, and Hortonworks recommends Ranger KMS.
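For illustration, here is a minimal sketch of how a client can obtain a KMS-backed KeyProvider through Hadoop's KeyProviderFactory; the kms:// host and port are placeholders and not part of this design.

import java.net.URI;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KmsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The kms:// URI points the KeyProvider client at the KMS REST endpoint.
    // Host and port below are placeholders for an actual deployment.
    KeyProvider provider =
        KeyProviderFactory.get(new URI("kms://http@kms-host:16000/kms"), conf);

    // List the key names known to the KMS.
    List<String> keys = provider.getKeys();
    System.out.println("Keys available in KMS: " + keys);
  }
}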
Design
The entity stored will be composed of three parts:
- Alias: This will be the identifier, provided by the user, that will be used to retrieve the object.
- Properties: A key value map containing the properties of the object being stored.
- Data: The data being stored. Passed in as a byte array.
The following operations will be supported by the store:
- Store
- Get data
- Get metadata
- List
- Delete
The system will expose the following APIs to clients:
// Represents the metadata about the data
interface SecureStoreMetaData {
  String getName();
  String getDescription();
  long getLastModifiedTime();
  Map<String, String> getProperties();
}

// Represents the secure data
interface SecureStoreData {
  // Returns the metadata about the secure data
  SecureStoreMetaData getMetaData();

  // Returns the secure data
  byte[] get();
}

// Provides read-only access to the secure store
interface SecureStore {
  // Returns a list of the available secure data in the secure store
  List<SecureStoreMetaData> list();

  // Gets the secure data
  SecureStoreData get(String name);
}

// Manager interface for managing secure data
interface SecureStoreManager {
  // Stores the secure data
  void put(String name, byte[] data, Map<String, String> properties);

  // Removes the secure data
  void delete(String name);
}
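As a usage sketch only (the Guice injection point and the "db.password" alias below are hypothetical), a program could read a stored value like this:

import com.google.inject.Inject;
import java.nio.charset.StandardCharsets;

class DatabaseConnector {
  private final SecureStore secureStore;

  @Inject
  DatabaseConnector(SecureStore secureStore) {
    this.secureStore = secureStore;
  }

  String getDbPassword() throws Exception {
    // Look up the stored entry by its alias and decode the raw bytes.
    SecureStoreData data = secureStore.get("db.password");
    return new String(data.get(), StandardCharsets.UTF_8);
  }
}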
REST API
Operation | REST API | Body | Response |
---|---|---|---|
Put | POST /security/store/v1/key | Content-Type: application/json { "name": "<name>", "description": "<description>", "data": "<data>" (base64-encoded), "properties": { "key": "value", ... } } | 201 Created, 409 Conflict |
Delete | DELETE /security/store/v1/key/<key-name> | N/A | 200 OK, 404 Not Found |
Get | GET /security/store/v1/key/<key-name> | N/A | 200 OK, Content-Type: application/json { "name": "<name>", "data": "<data>" (base64-encoded) }; 404 Not Found |
Get Metadata | GET /security/store/v1/key/<key-name>/metadata | N/A | 200 OK, Content-Type: application/json { "name": "<name>", "description": "<description>", "created": <millis-epoch> (long), "properties": { "key": "value", ... } }; 404 Not Found |
List | GET /security/store/v1/keys/names | N/A | 200 OK, Content-Type: application/json [ "<key-name>", "<key-name>", ... ] |
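For illustration, the sketch below issues the Put operation (an HTTP POST) against the proposed endpoint using plain JDK classes; the host, port, alias, and payload values are placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SecureStoreRestSketch {
  public static void main(String[] args) throws Exception {
    // Build the JSON body described in the table above; values are illustrative.
    String payload = "{"
        + "\"name\":\"db.password\","
        + "\"description\":\"Database password\","
        + "\"data\":\"" + Base64.getEncoder().encodeToString("s3cret".getBytes(StandardCharsets.UTF_8)) + "\","
        + "\"properties\":{\"owner\":\"admin\"}"
        + "}";

    // Host and port are placeholders for the CDAP endpoint serving this API.
    URL url = new URL("http://localhost:10000/security/store/v1/key");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("Response code: " + conn.getResponseCode()); // expect 201 Created
  }
}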
Access Control
The keystore can be protected with a key stored in the CDAP master keystore, which CDAP already requires the user to provide in order to enable SSL. Since programs are executed in the same JVM as the SDK process, they can access the sensitive data directly through the Guice binding that binds the SecureStore interface to the actual implementation.
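A minimal sketch of such a binding is shown below; FileBackedSecureStore is an assumed implementation class used only for illustration.

import com.google.inject.AbstractModule;
import com.google.inject.Scopes;

// Hypothetical Guice module; FileBackedSecureStore is an assumed implementation
// class, not part of the design above.
public class SecureStoreModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(SecureStore.class).to(FileBackedSecureStore.class).in(Scopes.SINGLETON);
    bind(SecureStoreManager.class).to(FileBackedSecureStore.class).in(Scopes.SINGLETON);
  }
}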
Caching
Hadoop KMS caches keys for a short period of time to avoid excessive hits to the underlying key provider. Of the operations we are interested in, only two use this cache: get data and get metadata.
Audit logs
All access to the secure store will be logged.
Implementation
Two implementations will be provided:
Standalone mode
An implementation using standard Java tools (JKS or JCEKS) will be provided. The key store will be stored in a file on the local filesystem.
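A minimal sketch of this approach, assuming a JCEKS keystore file and a password supplied by the CDAP master (the file path, alias, and password below are placeholders):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.KeyStore;
import javax.crypto.spec.SecretKeySpec;

public class JceksSecureStoreSketch {
  public static void main(String[] args) throws Exception {
    char[] password = "placeholder-password".toCharArray();

    // Create an empty JCEKS keystore; JCEKS can hold SecretKey entries.
    KeyStore keyStore = KeyStore.getInstance("JCEKS");
    keyStore.load(null, password);

    // Wrap the sensitive bytes in a SecretKey entry under the given alias.
    byte[] sensitiveData = "s3cret".getBytes(StandardCharsets.UTF_8);
    KeyStore.SecretKeyEntry entry =
        new KeyStore.SecretKeyEntry(new SecretKeySpec(sensitiveData, "AES"));
    keyStore.setEntry("db.password", entry, new KeyStore.PasswordProtection(password));

    // Persist the keystore to a file on the local filesystem.
    try (FileOutputStream out = new FileOutputStream("/tmp/securestore.jceks")) {
      keyStore.store(out, password);
    }

    // Read the entry back and recover the original bytes.
    KeyStore loaded = KeyStore.getInstance("JCEKS");
    try (FileInputStream in = new FileInputStream("/tmp/securestore.jceks")) {
      loaded.load(in, password);
    }
    KeyStore.SecretKeyEntry loadedEntry = (KeyStore.SecretKeyEntry)
        loaded.getEntry("db.password", new KeyStore.PasswordProtection(password));
    System.out.println(new String(loadedEntry.getSecretKey().getEncoded(), StandardCharsets.UTF_8));
  }
}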
The cluster has KMS running
If the cluster has KMS running, we will utilize that for securely storing sensitive information. To do that we will implement the Hadoop KeyProvider API and forward user calls to KMS.
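A hedged sketch of how the SecureStoreManager calls could be forwarded through the Hadoop KeyProvider API (the class name and property handling below are assumptions, not the final implementation):

import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;

// Assumed implementation class; the design above only specifies the interfaces.
public class KmsSecureStoreManager implements SecureStoreManager {
  private final KeyProvider provider;
  private final Configuration conf;

  public KmsSecureStoreManager(KeyProvider provider, Configuration conf) {
    this.provider = provider;
    this.conf = conf;
  }

  @Override
  public void put(String name, byte[] data, Map<String, String> properties) {
    try {
      // Forward the write to KMS through the KeyProvider API, carrying the
      // user-supplied properties as key attributes. The bit length is set to
      // match the provided material so KMS accepts arbitrary-length data.
      KeyProvider.Options options = new KeyProvider.Options(conf)
          .setAttributes(properties)
          .setBitLength(data.length * 8);
      provider.createKey(name, data, options);
      provider.flush();
    } catch (IOException e) {
      throw new RuntimeException("Failed to store " + name + " in KMS", e);
    }
  }

  @Override
  public void delete(String name) {
    try {
      provider.deleteKey(name);
      provider.flush();
    } catch (IOException e) {
      throw new RuntimeException("Failed to delete " + name + " from KMS", e);
    }
  }
}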
The cluster does not have KMS running
This mode will not be supported in this release.
Design Decisions:
- We will also need to modify the Input class to take namespaced datasets/streams. This can be achieved in different ways, which are listed below.
Out-of-scope User Stories (4.0 and beyond)
- Support for
References
https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html
https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/crypto/key/KeyProvider.html