Note: we're lovingly marking this project as Archived since we're no longer supporting it. You are welcome to read the code, fork your own version, and continue to use it under the terms of the project license.
CaffeOnSpark
What's CaffeOnSpark?
CaffeOnSpark brings deep learning to Hadoop and Spark clusters. By combining salient features from deep learning framework Caffe and big-data frameworks Apache Spark and Apache Hadoop, CaffeOnSpark enables distributed deep learning on a cluster of GPU and CPU servers.
As a distributed extension of Caffe, CaffeOnSpark supports neural network model training, testing, and feature extraction. Caffe users can now perform distributed learning using their existing LMDB data files and minimally adjusted network configurations (as illustrated).
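To give a sense of the kind of minor adjustment involved, the sketch below shows a Caffe data layer repointed at an existing LMDB source. It is modeled on CaffeOnSpark's sample configurations, but the exact field names (`source_class`, `share_in_parallel`) and paths here are assumptions for illustration, not verbatim from this repository:

```
layer {
  name: "data"
  type: "MemoryData"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  # CaffeOnSpark extension: a reader class that feeds the existing LMDB
  source_class: "com.yahoo.ml.caffe.LMDB"
  memory_data_param {
    # Reuse the dataset as-is; no conversion required
    source: "file:///tmp/mnist_train_lmdb"
    batch_size: 64
    channels: 1
    height: 28
    width: 28
    share_in_parallel: false
  }
}
```

The rest of the network definition (convolution, pooling, loss layers, and the solver prototxt) stays as it was for single-node Caffe.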
CaffeOnSpark is a Spark package for deep learning. It complements non-deep-learning libraries such as MLlib and Spark SQL. CaffeOnSpark's Scala API gives Spark applications an easy mechanism to invoke deep learning (see sample) over distributed datasets.
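A minimal sketch of what such an invocation might look like from Scala, assuming the `com.yahoo.ml.caffe` package with the `Config`, `DataSource`, and `CaffeOnSpark` names used in the project's samples (treat the exact signatures as assumptions and consult the bundled examples for the authoritative API):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.yahoo.ml.caffe.{CaffeOnSpark, Config, DataSource}

object CaffeOnSparkDemo {
  def main(args: Array[String]): Unit = {
    // Standard Spark setup; the cluster manager is chosen at submit time
    val sc = new SparkContext(new SparkConf().setAppName("CaffeOnSparkDemo"))

    // Parse CaffeOnSpark options: solver/net prototxt, devices, model path, ...
    val conf = new Config(sc, args)

    // Build a training data source from the net's data layer definition
    val trainSource = DataSource.getSource(conf, true)

    // Train across the cluster, then extract features from the test source
    val cos = new CaffeOnSpark(sc)
    cos.train(trainSource)
    val featureDF = cos.features(DataSource.getSource(conf, false))

    featureDF.show()
    sc.stop()
  }
}
```

Because training and feature extraction return ordinary Spark abstractions (a DataFrame of features), the output can be fed directly into MLlib or Spark SQL pipelines.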
CaffeOnSpark provides some important benefits (see our blog) over alternative deep learning solutions:

- It enables model training, testing, and feature extraction directly on Hadoop datasets stored in HDFS on Hadoop clusters.
- It turns your Hadoop or Spark cluster(s) into a powerful platform for deep learning, without the need to set up a separate dedicated cluster.
- Server-to-server direct communication (Ethernet or InfiniBand) achieves faster learning and eliminates scalability bottlenecks.
- Caffe users' existing datasets (e.g. LMDB) and configurations can be used for distributed learning without any conversion.
- A high-level API empowers Spark applications to conduct deep learning easily.
- Incremental learning is supported, leveraging previously trained models or snapshots.
- Additional data formats and network interfaces can be added easily.
- It can be easily deployed on a public cloud (e.g. AWS EC2) or a private cloud.
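In practice, a job exercising these capabilities is launched with the standard spark-submit tool. The sketch below is illustrative only: the `--master`, `--deploy-mode`, and `--class` options are standard Spark, but the jar name, application flags (`-train`, `-conf`, `-devices`, `-connection`, `-model`), and HDFS path are assumptions modeled on the project's published examples and may differ from this codebase:

```shell
spark-submit --master yarn --deploy-mode cluster \
    --num-executors 2 \
    --class com.yahoo.ml.caffe.CaffeOnSpark \
    caffe-grid-0.1-SNAPSHOT-jar-with-dependencies.jar \
    -train \
    -conf lenet_memory_solver.prototxt \
    -devices 1 \
    -connection ethernet \
    -model hdfs:///mnist.model
```

The same submission pattern applies whether the cluster runs on premises or on a public cloud such as AWS EC2; only the Spark master and storage URIs change.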