Reinforcement Learning


Reinforcement learning is a way of teaching a software agent how to behave in an environment by telling it how well it is doing. It is an area of machine learning inspired by behaviorist psychology.

Reinforcement learning is different from supervised learning because correct input and output pairs are never shown to the agent. Also, reinforcement learning usually learns as it goes (online learning), unlike supervised learning. This means an agent has to balance exploring new actions against sticking with what it already knows works best (the exploration-exploitation trade-off).

Introduction

File:Rl agent.png

A reinforcement learning system is made up of a policy ([math]\displaystyle{ \pi }[/math]), a reward function ([math]\displaystyle{ R }[/math]), a value function ([math]\displaystyle{ v }[/math]), and an optional model of the environment.

A policy tells the agent what to do in a given situation. It can be a simple table of rules or a complicated search for the best action. Policies can also be stochastic, which means that instead of fixed rules the policy assigns a probability to each action. A policy by itself can make an agent act, but it can't learn on its own.
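To make this concrete, here is a minimal Python sketch of a stochastic policy stored as a table of action probabilities. The states "A" and "B" and the actions "left" and "right" are made-up examples, not anything from this article.

<syntaxhighlight lang="python">
import random

# A tiny stochastic policy: for each state, a probability for each action.
policy = {
    "A": {"left": 0.2, "right": 0.8},   # in state A, prefer "right"
    "B": {"left": 0.5, "right": 0.5},   # in state B, act uniformly at random
}

def choose_action(state):
    """Sample an action according to the probabilities the policy assigns."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

print(choose_action("A"))  # usually prints "right"
</syntaxhighlight>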

A reward function defines the goal for an agent. It takes in a state (or a state and the action taken in that state) and gives back a number called the reward, which tells the agent how good it is to be in that state. The agent's job is to collect as much reward as it can in the long run. If an action yields a low reward, the agent will probably choose a better action in the future. Biology uses reward signals like pleasure and pain to make sure organisms stay alive long enough to reproduce. Reward signals can also be stochastic, like a slot machine at a casino, which sometimes pays out and sometimes doesn't.
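As a rough illustration, here is a small Python sketch of a stochastic reward function, loosely modelled on the slot-machine example above; the payout probability and amounts are made-up assumptions.

<syntaxhighlight lang="python">
import random

def reward(state, action):
    """Return a numeric reward for taking `action` in `state`."""
    if action == "pull_lever":
        # Pays out 10 about 5% of the time; otherwise the play costs 1.
        return 10.0 if random.random() < 0.05 else -1.0
    return 0.0  # doing nothing yields no reward

# The long-run total is what the agent cares about.
print(sum(reward("casino", "pull_lever") for _ in range(1000)))
</syntaxhighlight>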

A value function tells an agent how much total reward it can expect to collect by following a policy [math]\displaystyle{ \pi }[/math] starting from state [math]\displaystyle{ s }[/math]. It represents how desirable it is to be in a certain state. Since the value function isn't given to the agent directly, the agent needs to estimate it from the rewards it has received so far. Value function estimation is the most important part of most reinforcement learning algorithms.
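Here is a minimal sketch of tabular value estimation in Python, using a simple temporal-difference style update; the learning rate, discount factor, states, and sample transition are illustrative assumptions.

<syntaxhighlight lang="python">
alpha = 0.1   # learning rate: how far each estimate moves toward new evidence
gamma = 0.9   # discount: how much future reward is worth compared to immediate reward

V = {"A": 0.0, "B": 0.0}  # current guesses for the value of each state

def td_update(state, reward, next_state):
    """Nudge V[state] toward the reward plus the discounted value of the next state."""
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])

td_update("A", reward=1.0, next_state="B")
print(V)  # {'A': 0.1, 'B': 0.0}
</syntaxhighlight>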

A model is the agent's internal copy of the environment: it predicts how the environment will respond to the agent's actions, so the agent can use it to plan ahead without having to act in the real environment.
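A model can be as simple as a table that predicts the next state and reward for each state-action pair, which the agent can then use for one-step planning. The sketch below assumes made-up transitions and value estimates.

<syntaxhighlight lang="python">
# The agent's learned model: (state, action) -> (predicted next state, predicted reward).
model = {
    ("A", "right"): ("B", 1.0),
    ("A", "left"):  ("A", 0.0),
}

V = {"A": 0.0, "B": 5.0}  # the agent's current value estimates

def plan(state, actions):
    """Pick the action whose predicted outcome looks best under the model."""
    def predicted_return(action):
        next_state, reward = model[(state, action)]
        return reward + V[next_state]
    return max(actions, key=predicted_return)

print(plan("A", ["left", "right"]))  # "right"
</syntaxhighlight>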

Knowing this, we can describe the main loop of a reinforcement learning episode. The agent interacts with the environment in discrete time steps. Think of it like the "tick-tock" of a clock: with discrete time, things only happen on the "ticks" and the "tocks", and not in between. At each time [math]\displaystyle{ t=0, 1, 2, 3,... }[/math], the agent observes the environment's state [math]\displaystyle{ S_t }[/math] and picks an action [math]\displaystyle{ A_t }[/math] based on a policy [math]\displaystyle{ \pi }[/math]. At the next time step, the agent receives a reward signal [math]\displaystyle{ R_{t+1} }[/math] and a new observation [math]\displaystyle{ S_{t+1} }[/math]. The value function [math]\displaystyle{ v(S_t) }[/math] is then updated using the reward. This continues until a terminal state [math]\displaystyle{ S_T }[/math] is reached.
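Putting the pieces together, here is a minimal Python sketch of that loop for a made-up three-state chain environment; the environment, the trivial policy, and the update rule are illustrative assumptions rather than anything prescribed by this article.

<syntaxhighlight lang="python">
import random

alpha, gamma = 0.1, 0.9
V = {"start": 0.0, "middle": 0.0, "goal": 0.0}   # value estimates

def step(state, action):
    """A stand-in environment: moving forward advances along the chain."""
    if state == "start":
        return ("middle", 0.0)   # (next state S_{t+1}, reward R_{t+1})
    return ("goal", 1.0)         # reaching the goal pays a reward of 1

state = "start"                                  # S_0
while state != "goal":                           # "goal" plays the role of the terminal state S_T
    action = random.choice(["forward"])          # the policy picks A_t (trivially, here)
    next_state, reward = step(state, action)     # the environment returns R_{t+1} and S_{t+1}
    # Update the value estimate v(S_t) using the reward.
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])
    state = next_state

print(V)  # {'start': 0.0, 'middle': 0.1, 'goal': 0.0}
</syntaxhighlight>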