If you post to either the mailing list or GitHub, please use this format:
Version: Version of the software you're running
Type of Neural Net: RBM, DBN, RNTN, Word2Vec
Configuration: Your neural net configuration. This can be code or a reasonable list.
Data: Description of the data you're using.
Number of examples: Number of examples used
Description of problem: Your specific problem goes here.
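For instance, a hypothetical report (every value below is illustrative, not from a real issue) might read:

Version: 0.0.3.2.5
Type of Neural Net: DBN
Configuration: 784 -> 500 -> 250 -> 10, sigmoid activations, negative log likelihood loss
Data: MNIST handwritten digits
Number of examples: 60,000
Description of problem: Training loss diverges after the first epoch when dropout is enabled.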
Deep learning is a form of state-of-the-art machine learning that can learn to recognize patterns in data without supervision.
Unsupervised pattern recognition saves time during data analysis, trend discovery and labeling of certain types of data, such as images, text, sound and time series.
See Deeplearning4j.org for applications, tutorials, definitions and other resources on the discipline.
See our downloads page: http://deeplearning4j.org/downloads
- Distributed deep learning via Akka clustering, distributed job coordination via Hazelcast, and configuration storage in Apache ZooKeeper.
- Various data-preprocessing tools, such as an image loader that allows for binarization, pixel scaling, and normalization to zero mean and unit standard deviation.
- Deep-belief networks for both continuous and binary data, with support for sequential data via a moving window and Viterbi.
- Native matrices via Jblas, a matrix-computation library backed by native Fortran routines. As of 1.2.4, GPUs are supported when nvblas is present.
- Automatic cluster provisioning for Amazon Web Services' Elastic Compute Cloud (EC2).
- Baseline ability to read from a variety of input providers, including S3 and local file systems.
- Text processing via Word2Vec, as well as a term frequency–inverse document frequency (TF-IDF) vectorizer (see the sketch after this list).
- Special tokenizers/stemmers and a SentenceIterator interface that make text-input handling source-agnostic.
- Ability to do moving-window operations via a Window encoding, optimized with Viterbi.
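As a taste of the text pipeline, here is a minimal Word2Vec sketch. It uses the builder-style API of later Deeplearning4j releases, so class and method names may differ in the 0.0.3.x snapshots; corpus.txt is a placeholder for your own corpus.

import java.io.File;
import org.deeplearning4j.models.word2vec.Word2Vec;
import org.deeplearning4j.text.sentenceiterator.LineSentenceIterator;
import org.deeplearning4j.text.sentenceiterator.SentenceIterator;
import org.deeplearning4j.text.tokenization.tokenizerfactory.DefaultTokenizerFactory;
import org.deeplearning4j.text.tokenization.tokenizerfactory.TokenizerFactory;

public class Word2VecSketch {
    public static void main(String[] args) throws Exception {
        // SentenceIterator abstracts over the text source; corpus.txt is a placeholder
        SentenceIterator sentences = new LineSentenceIterator(new File("corpus.txt"));
        TokenizerFactory tokenizer = new DefaultTokenizerFactory();

        // NOTE: builder API from later Deeplearning4j releases; 0.0.3.x names differ
        Word2Vec vec = new Word2Vec.Builder()
                .iterate(sentences)
                .tokenizerFactory(tokenizer)
                .layerSize(100)       // dimensionality of the word vectors
                .windowSize(5)        // context window size
                .minWordFrequency(5)  // drop words rarer than this
                .build();
        vec.fit();

        System.out.println(vec.wordsNearest("day", 10)); // nearest neighbors in vector space
    }
}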
Neural-net configuration options include (a configuration sketch follows this list):
- L2 regularization
- Dropout
- AdaGrad
- Momentum
- Optimization algorithms for training (conjugate gradient, stochastic gradient descent)
- Different kinds of activation functions (tanh, sigmoid, hard tanh, softmax)
- Optional normalization by input rows
- Sparsity (forcing activations of sparse/rare inputs)
- Weight transforms (useful for deep autoencoders)
- Different kinds of loss functions: squared loss, reconstruction cross-entropy, negative log likelihood
- Probability-distribution manipulation for initial weight generation
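As an illustration of how several of these options combine, here is a minimal configuration sketch. It uses the NeuralNetConfiguration builder from later Deeplearning4j releases; the 0.0.3.x API differs, so treat the names and hyperparameter values as assumptions rather than a definitive recipe.

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ConfigSketch {
    public static void main(String[] args) {
        // NOTE: builder API from later Deeplearning4j releases; 0.0.3.x names differ
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .l2(1e-4)                          // L2 regularization on the weights
                .updater(new Nesterovs(0.01, 0.9)) // SGD with momentum; AdaGrad is another option
                .list()
                .layer(new DenseLayer.Builder()
                        .nIn(784).nOut(256)
                        .activation(Activation.TANH) // tanh hidden activation
                        .dropOut(0.5)                // dropout on this layer
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10)
                        .activation(Activation.SOFTMAX) // softmax output
                        .build())
                .build();
        System.out.println(conf.toJson()); // print the assembled configuration
    }
}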
See our wiki: https://github.com/agibsonccc/java-deeplearning/wiki/Deeplearning4j-Roadmap
Deeplearning4j has its own IRC channel at https://webchat.freenode.net/, a network intended primarily for developers of free and open-source software. Just type /join #deeplearning4j where you would normally chat. Alternatively, we're also reachable through Skymind's contact page.
Initial deployment instructions:

- git clone [email protected]:MysterionRise/mavenized-jcuda.git
- cd mavenized-jcuda && mvn clean install -DskipTests
- Include the nd4j-jcublas dependency in your POM:

<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-jcublas</artifactId>
  <version>0.0.3.2-SNAPSHOT</version>
</dependency>
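Once the dependency resolves, a quick smoke test can confirm the backend loads. This sketch uses ND4J's standard array API from later releases; exact signatures in this snapshot may differ.

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class JcublasSmokeTest {
    public static void main(String[] args) {
        // Matrix ops route through the jcublas backend when it is on the classpath
        INDArray a = Nd4j.rand(512, 512);
        INDArray b = Nd4j.rand(512, 512);
        INDArray c = a.mmul(b); // matrix multiply, dispatched to CUBLAS
        System.out.println("Result shape: " + java.util.Arrays.toString(c.shape()));
    }
}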
File bug reports and feature requests at: https://github.com/SkymindIO/deeplearning4j/issues
It is highly recommended that you use the development snapshots right now.
Put this snippet in your POM and use the dependencies versioned below.
<repositories>
  <repository>
    <id>snapshots-repo</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <releases><enabled>false</enabled></releases>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-core</artifactId>
    <version>0.0.3.2.5</version>
  </dependency>
  <dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-scaleout-akka</artifactId>
    <version>0.0.3.2.5</version>
  </dependency>
  <dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-nlp</artifactId>
    <version>0.0.3.2.5</version>
  </dependency>
</dependencies>