Our dream is to create general AI suitable for all domains; here are some papers we highly recommend:
- Auto-Encoding Variational Bayes [ORIGINAL]
- VQ-VAE-2 [SOTA]
- Adversarial Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
- Learning Disentangled Joint Continuous and Discrete Representations
- Understanding disentangling in β-VAE
- Isolating Sources of Disentanglement in VAEs
- Ladder Variational Autoencoders
- Disentangling by Factorising [FactorVAE]
- Variational Inference of Disentangled Latent Concepts from Unlabeled Observations [DIP-VAE]
- Recent Advances in Autoencoder-Based Representation Learning
- Are Disentangled Representations Helpful for Abstract Visual Reasoning?
- f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
- InfoGAN
- WGAN
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
- Learning Abstract Options
- Successor Options: An Option Discovery Framework for Reinforcement Learning
- Learning Awareness Models (ICLR 2018)
- Curiosity-driven Exploration by Self-supervised Prediction (ICML 2017) [curiosity] [ICM]
- Learning Latent Dynamics for Planning from Pixels (ICML 2019) [MPC] [VAE] [PlaNet]
- Dynamics-Aware Unsupervised Discovery of Skills
- InfoBot
- EMI
- Unsupervised Discovery of Decision States for Transfer in Reinforcement Learning
- Contrastive Bidirectional Transformer for Temporal Representation Learning
- Contrastive Multiview Coding
- DIM
- CPC
- MINE
- Learning Belief Representations for Imitation Learning in POMDPs
- Information Asymmetry in KL-regularized RL
- Exploiting Hierarchy for Learning and Transfer in KL-regularized RL
- Learning to Share and Hide Intentions using Information Regularization
Learn a forward-dynamics predictor and use its prediction error as the intrinsic reward; an inverse dynamics model is trained jointly to shape the encoder, which projects observations into a space that is invariant to the parts of the environment that do not affect the agent or the task. A sketch follows.
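A minimal sketch of this recipe, assuming PyTorch (layer sizes and module names here are illustrative, not the authors' reference implementation): the forward model's feature-space prediction error serves as the curiosity reward, while only the inverse-dynamics loss updates the encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    def __init__(self, obs_dim, act_dim, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        # inverse model: (phi_t, phi_t+1) -> action logits
        self.inverse = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, act_dim))
        # forward model: (phi_t, a_t) -> predicted phi_t+1
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + act_dim, 128),
                                           nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, obs, next_obs, action_onehot):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        # inverse dynamics loss is what trains the encoder, so the features
        # keep only what is needed to predict the agent's own action
        act_logits = self.inverse(torch.cat([phi, phi_next], dim=-1))
        inverse_loss = F.cross_entropy(act_logits, action_onehot.argmax(dim=-1))
        # forward prediction error in feature space = curiosity reward;
        # inputs are detached so this loss trains only the forward model
        pred = self.forward_model(torch.cat([phi.detach(), action_onehot], dim=-1))
        err = 0.5 * (pred - phi_next.detach()).pow(2).sum(dim=-1)
        return err.detach(), inverse_loss, err.mean()

icm = ICM(obs_dim=8, act_dim=4)
r_int, l_inv, l_fwd = icm(torch.randn(16, 8), torch.randn(16, 8),
                          F.one_hot(torch.randint(0, 4, (16,)), 4).float())
```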
- Learn representations of states and actions such that the representation of the corresponding next state follows linear dynamics (see the sketch after this list)
- Intrinsic reward augmentation
- https://github.com/snu-mllab/EMI
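A hedged sketch of the linear-dynamics embedding idea, again in PyTorch; the network shapes and the simple additive step phi(s') ≈ phi(s) + psi(a) are assumptions for illustration, not EMI's exact objective.

```python
import torch
import torch.nn as nn

class LinearDynamicsEmbedding(nn.Module):
    def __init__(self, obs_dim, act_dim, emb_dim=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))  # state embedding
        self.psi = nn.Sequential(nn.Linear(act_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))  # action embedding

    def residual(self, obs, action, next_obs):
        # deviation of the transition from linear dynamics in embedding space
        return self.phi(next_obs) - (self.phi(obs) + self.psi(action))

model = LinearDynamicsEmbedding(obs_dim=8, act_dim=2)
obs, act, nxt = torch.randn(16, 8), torch.randn(16, 2), torch.randn(16, 8)
loss = model.residual(obs, act, nxt).pow(2).sum(-1).mean()
# the per-transition residual norm can also serve as an intrinsic reward bonus
```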
In this work we offer an information-theoretic framework for representation learning that connects with a wide class of existing objectives in machine learning. We develop a formal correspondence between this work and thermodynamics and discuss its implications.
Proposes a new family of hybrid models that combines the strengths of supervised learning (SL) and reinforcement learning (RL), trained jointly: the SL component is a recurrent neural network (RNN), or its long short-term memory (LSTM) variant, which can capture long-term dependencies on history and thus provides an effective way of learning a representation of the hidden state. The RL component is a deep Q-network (DQN) that learns to optimize control for maximizing long-term reward. Extensive experiments on a direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach. A minimal sketch of the architecture follows.
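A minimal sketch of the hybrid idea described above, assuming PyTorch (all names and sizes are illustrative, not the paper's reference code): an LSTM summarizes the observation history into a hidden-state representation, and a DQN-style head maps that representation to Q-values over actions.

```python
import torch
import torch.nn as nn

class RecurrentDQN(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        # SL component: recurrent encoder capturing long-term history dependency
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # RL component: Q-network over the learned hidden-state representation
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, hidden=None):
        out, hidden = self.lstm(obs_seq, hidden)  # (B, T, hidden_dim)
        q_values = self.q_head(out)               # (B, T, n_actions)
        return q_values, hidden

# usage: greedy action from the Q-values at the last time step
net = RecurrentDQN(obs_dim=10, n_actions=4)
q, h = net(torch.randn(2, 5, 10))
action = q[:, -1].argmax(dim=-1)
```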
Graphical models, Information Bottleneck and Unsupervised Skill Learning