RSRL (api)

Reinforcement learning should be fast, safe and easy to use.

Overview

rsrl provides generic constructs for running reinforcement learning (RL) experiments: a simple, extensible framework together with efficient implementations of existing methods, aimed at rapid prototyping.

Installation

[dependencies]
rsrl = "0.6"
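
The usage example below also relies on slog for its logging macros, so a manifest for that example might look like the following (slog 2.x is assumed here; rsrl itself only needs the first entry):

[dependencies]
rsrl = "0.6"
slog = "2"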

Usage

The code below shows how one could use rsrl to train and evaluate a QLearning agent that uses a linear function approximator with a Fourier basis projection to solve the canonical Mountain Car problem.

See examples/ for more...

extern crate rsrl;
#[macro_use]
extern crate slog;

use rsrl::{
    control::td::QLearning,
    core::{make_shared, run, Evaluation, Parameter, SerialExperiment},
    domains::{Domain, MountainCar},
    fa::{basis::{Composable, fixed::Fourier}, LFA},
    geometry::Space,
    logging,
    policies::fixed::{EpsilonGreedy, Greedy, Random},
};

fn main() {
    let domain = MountainCar::default();
    let mut agent = {
        let n_actions = domain.action_space().card().into();

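        // Linear Q-function: a vector-valued LFA over a 3rd-order Fourier projection
        // of the state space (plus a constant/bias term), one output per action.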
        let bases = Fourier::from_space(3, domain.state_space()).with_constant();
        let q_func = make_shared(LFA::vector(bases, n_actions));

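        // ε-greedy behaviour policy over the greedy (argmax) policy, with an
        // exponentially decaying exploration parameter.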
        let policy = EpsilonGreedy::new(
            Greedy::new(q_func.clone()),
            Random::new(n_actions),
            Parameter::exponential(0.3, 0.001, 0.99),
        );

        QLearning::new(q_func, policy, 0.001, 0.99)
    };

    let logger = logging::root(logging::stdout());
    let domain_builder = Box::new(MountainCar::default);

    // Training phase:
    let _training_result = {
        // Start a serial learning experiment up to 1000 steps per episode.
        let e = SerialExperiment::new(&mut agent, domain_builder.clone(), 1000);

        // Realise 1000 episodes of the experiment generator.
        run(e, 1000, Some(logger.clone()))
    };

    // Testing phase:
    let testing_result = Evaluation::new(&mut agent, domain_builder).next().unwrap();

    info!(logger, "solution"; testing_result);
}
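
Since the domain, function approximator, policy and learning algorithm are all decoupled, swapping in a different control method typically only touches the last line of the agent block. As a rough sketch (the SARSA agent in rsrl::control::td is assumed here to expose a constructor mirroring QLearning::new):

use rsrl::control::td::SARSA;

let mut agent = {
    // ... build q_func and policy exactly as above ...

    // Assumption: SARSA::new takes (q_func, policy, learning rate, discount factor),
    // mirroring QLearning::new.
    SARSA::new(q_func, policy, 0.001, 0.99)
};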

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate and adhere to the AngularJS commit message conventions.

License

MIT
