By Lutz Frommberger

ISBN-10: 3642165893

ISBN-13: 9783642165894

Reinforcement learning has developed as a successful learning paradigm for domains that are not fully understood and that are too complex to be described in closed form. However, reinforcement learning does not scale well to large and continuous problems. Furthermore, acquired knowledge is specific to the learned task, and transfer of knowledge to new tasks is crucial.

In this book the author investigates whether deficiencies of reinforcement learning can be overcome by suitable abstraction methods. He discusses various forms of spatial abstraction, in particular qualitative abstraction, a form of representing knowledge that has been thoroughly investigated and successfully applied in spatial cognition research. With his approach, he exploits spatial structures and structural similarity to support the learning process by abstracting from less significant features and stressing the essential ones. The author demonstrates his learning approach and the transferability of knowledge by having his system learn in a virtual robot simulation system and subsequently transfer the acquired knowledge to a physical robot. The approach is influenced by findings from cognitive science.

The book is suitable for researchers working in artificial intelligence, in particular knowledge representation, learning, spatial cognition, and robotics.



Best robotics & automation books

Innovations in Robot Mobility and Control

There exists quite a vast literature on mobile robots, covering fundamental principles on motion control and path-planning in indoor environments using ultrasonic/laser transducers. However, there is a scarcity of books/collective documents on vision-based navigation of mobile robots and multi-agent systems.

The ROV Manual: A User Guide for Observation Class Remotely Operated Vehicles

Many underwater operations that were once carried out by divers can now be conducted more effectively and with less risk with Remotely Operated Vehicles (ROVs). This is the first ROV 'how-to' manual for those involved with smaller observation-class ROVs used for surveying, inspection, observation and research purposes.

Control in Robotics and Automation: Sensor-Based Integration

Microcomputer technology and micromechanical design have contributed to recent rapid advances in robotics. Particular advances have been made in sensor technology that allows robotic systems to gather data and react "intelligently" in flexible manufacturing systems. The analysis and recording of these data are vital to controlling the robot.

Vehicle Dynamics and Control

Vehicle Dynamics and Control provides comprehensive coverage of vehicle control systems and the dynamic models used in the development of these control systems. The control system applications covered in the book include cruise control, adaptive cruise control, ABS, automated lane keeping, automated highway systems, yaw stability control, engine control, passive, active and semi-active suspensions, tire-road friction coefficient estimation, rollover prevention, and hybrid electric vehicles.

Extra info for Qualitative Spatial Abstraction in Reinforcement Learning

Sample text

Reinforcement learning takes place in repetitive cycles which we call episodes. An episode begins with the agent in an initial state s_S ∈ S. It repeatedly chooses an action a ∈ A, which brings it to the next state and delivers a reinforcement signal r. An episode ends when the agent reaches a terminal state. Alternatively, an episode ends if a previously defined number of steps H, the so-called horizon, has been executed without reaching a terminal state.

Markov Decision Processes

Action selection in a state s is not trivial, because the goal is to choose the action that maximizes the received reward over the whole sequence of actions, not just the reinforcement given in a particular state.
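To make the episode cycle concrete, here is a minimal Python sketch of one episode, including the cumulative reward the agent ultimately tries to maximize. The env.reset()/env.step() interface and the policy function are hypothetical stand-ins, not code from the book:

```python
def run_episode(env, policy, horizon):
    """Run one episode: start in s_S, act until a terminal state or H steps."""
    s = env.reset()                   # initial state s_S in S
    ret = 0.0                         # cumulative reward over the episode
    for _ in range(horizon):          # the horizon H bounds the episode length
        a = policy(s)                 # choose an action a in A
        s, r, terminal = env.step(a)  # next state and reinforcement signal r
        ret += r
        if terminal:                  # the episode ends in a terminal state
            break
    return ret
```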

For every state s ∈ S only the corresponding value of the representative V(κ(s)) is stored. As the boundaries between the abstract states are arbitrary, this is not a reasonable method to approximate a value function: generalization occurs over states that are mapped to the same entity, but only and exclusively within these states, while directly neighboring states might not be targets of generalization. So state space coarsening is generally too rough to work as a successful value function approximation.
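The following Python sketch illustrates this limitation. The aggregation function kappa and the fixed-width cells are assumptions chosen for illustration, not the book's construction; note how a value update generalizes to every state in the same cell but not to a direct neighbor across a cell boundary:

```python
from collections import defaultdict

V = defaultdict(float)   # one stored value per abstract state kappa(s)

def kappa(s):
    # Hypothetical aggregation: cut a 1-D state space into fixed cells
    # with arbitrary boundaries at multiples of 10.
    return s // 10

def value(s):
    return V[kappa(s)]   # V(kappa(s)) is shared by the whole cell

def update(s, v):
    V[kappa(s)] = v      # updating one state updates its entire cell

update(12, 5.0)
print(value(19))  # 5.0: state 19 lies in the same cell as state 12
print(value(20))  # 0.0: the direct neighbor 20 crosses a cell boundary
```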

However, this work focuses on the most popular of the TD methods, Q-learning, which is described in Sect. 3. Updates of this form are called 1-step backups, as they are only applied to the previous state value V(s) based on the TD error δ. All other states that were visited before s do not get updated, despite having contributed to reaching the state s the agent is actually in. So for a state s we only regard rewards that will be given one time step in the future. Analogously, backups that regard more than one future state are called 2-step, 3-step, or generally n-step backups.
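As a concrete instance of a 1-step backup, the standard tabular Q-learning update corrects only the entry for the state-action pair just left, by the TD error δ = r + γ max_a' Q(s', a') − Q(s, a). A minimal sketch, assuming a dictionary-based table and example step-size and discount parameters:

```python
def q_learning_backup(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """1-step Q-learning backup: only the entry Q[(s, a)] is corrected.

    States visited earlier in the episode remain untouched; alpha (step
    size) and gamma (discount factor) are assumed example values.
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    delta = r + gamma * best_next - Q.get((s, a), 0.0)   # TD error
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * delta
    return delta
```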
