# RL4J: Reinforcement Learning for Java
RL4J is a reinforcement learning framework integrated with Deeplearning4j and released under the Apache 2.0 open-source license. By contributing code to this repository, you agree to make your contribution available under the same Apache 2.0 license.

It currently implements:

- DQN (Deep Q-Learning, with double DQN)
- Async RL (A3C, async n-step Q-Learning)

Both families work with low-dimensional (array of observations) and high-dimensional (pixel) input.
Here is a useful blog post I wrote to introduce reinforcement learning, DQN, and async RL:
## Disclaimer

This is a tech preview and is distributed as-is. Comments are welcome on our Gitter channel.
## Quickstart

- `mvn install`
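A build-from-source session might look like this (the repository URL is an assumption based on the Deeplearning4j organization; adjust it to wherever you cloned from):

```shell
# Clone the sources and build/install all RL4J modules into the local Maven repo
git clone https://github.com/deeplearning4j/rl4j.git
cd rl4j
mvn install
```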
## Visualisation

To quickly try Cartpole:

- run with this main
## Doom

Doom support is not ready yet, but you can make it work, if you feel adventurous, with some additional steps:

- You will need ViZDoom; compile its native library and move it into a folder at the root of your project
- `export MAVEN_OPTS=-Djava.library.path=THEFOLDEROFTHELIB`
- `mvn compile exec:java -Dexec.mainClass="YOURMAINCLASS"`
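Put together, the steps above might look like the following shell session. The `natives` folder name and the library path are placeholders of my choosing, not something RL4J prescribes, and the library file name depends on your OS and build output:

```shell
# Hypothetical folder for the compiled ViZDoom native library
mkdir -p natives
cp /path/to/ViZDoom/bin/libvizdoom.so natives/   # adjust file name per OS

# Make the JVM load native libraries from that folder
export MAVEN_OPTS="-Djava.library.path=$(pwd)/natives"

# Run your own main class that sets up the Doom environment
mvn compile exec:java -Dexec.mainClass="YOURMAINCLASS"
```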
## Malmo (Minecraft)

- Download and unzip Malmo from here
- `export MALMO_HOME=YOURMALMO_FOLDER`
- `export MALMO_XSD_PATH=$MALMO_HOME/Schemas`
- Launch Malmo per its instructions
- Run with this main
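As a concrete sketch of the environment setup above (the install path is a placeholder; `launchClient.sh` is the client launcher shipped in Malmo's `Minecraft` folder):

```shell
# Unzip a downloaded Malmo release somewhere (path is a placeholder)
unzip Malmo-*.zip -d $HOME

# Tell Malmo where its installation and mission schemas live
export MALMO_HOME=$HOME/Malmo
export MALMO_XSD_PATH=$MALMO_HOME/Schemas

# Start the Minecraft client bundled with Malmo, then run your main class
(cd $MALMO_HOME/Minecraft && ./launchClient.sh) &
```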
## WIP

- Documentation
- Serialization/deserialization (load/save)
- Compression of pixels, in order to store 1M states in a reasonable amount of memory
- Async learning: A3C and n-step learning (requires some features missing from DL4J: calculating and applying gradients)
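The pixel-compression item above can be illustrated with the plain JDK `Deflater`/`Inflater` classes. This is only a sketch of the idea, not RL4J code (the `FrameCompression` class and its methods are made up here): raw frames compress well because game frames contain large uniform regions, so storing them compressed shrinks the replay memory footprint.

```java
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class FrameCompression {

    // Compress a raw frame (e.g. an 84x84 grayscale image) with DEFLATE.
    public static byte[] compress(byte[] frame) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(frame);
        deflater.finish();
        // Worst case DEFLATE output is slightly larger than the input
        byte[] buf = new byte[frame.length + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    // Restore the original frame; the caller must remember its length.
    public static byte[] decompress(byte[] packed, int originalLength) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(packed);
        byte[] out = new byte[originalLength];
        inflater.inflate(out);
        inflater.end();
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Synthetic frame with large uniform regions, like a game screen
        byte[] frame = new byte[84 * 84];
        for (int i = 0; i < frame.length; i++) {
            frame[i] = (byte) (i % 8 == 0 ? 255 : 0);
        }
        byte[] packed = compress(frame);
        byte[] restored = decompress(packed, frame.length);
        System.out.println(packed.length < frame.length);
        System.out.println(Arrays.equals(frame, restored));
    }
}
```

Inflating frames lazily at minibatch-sampling time trades a little CPU for a large memory saving.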
## Author
## Proposed contribution areas

- Continuous control
- Policy gradients
- Update rl4j-gym to make it compatible with pixel environments, to play with Pong, Doom, etc.