From 42d0d9498fee9b75a9b07fb572a8570b6e4a0fdc Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Sat, 16 Dec 2023 17:00:20 +0000
Subject: [PATCH] build based on a381ab0

---
 dev/discrete_explicit/index.html | 2 +-
 dev/index.html                   | 2 +-
 dev/quick/index.html             | 2 +-
 dev/search/index.html            | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/dev/discrete_explicit/index.html b/dev/discrete_explicit/index.html
index e2c49d5..d1f762c 100644
--- a/dev/discrete_explicit/index.html
+++ b/dev/discrete_explicit/index.html
@@ -36,4 +36,4 @@
 end
 end
-m = DiscreteExplicitPOMDP(S,A,O,T,Z,R,γ)
+m = DiscreteExplicitPOMDP(S,A,O,T,Z,R,γ)

Constructor Documentation

QuickPOMDPs.DiscreteExplicitMDP (Type)
DiscreteExplicitMDP(S,A,T,R,γ,[p₀],[terminals=Set()])

Create an MDP defined by the tuple (S,A,T,R,γ).

Arguments

Required

  • S,A: State and action spaces (typically Vectors)
  • T::Function: Transition probability distribution function; $T(s,a,s')$ is the probability of transitioning to state $s'$ from state $s$ after taking action $a$.
  • R::Function: Reward function; $R(s,a)$ is the reward for taking action $a$ in state $s$.
  • γ::Float64: Discount factor.

Optional

  • p₀=Uniform(S): Initial state distribution (see POMDPModelTools.Deterministic and POMDPModelTools.SparseCat for other options).

Keyword

  • terminals=Set(): Set of terminal states. Once a terminal state is reached, no more actions can be taken or reward received.
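As a quick illustration, here is a minimal sketch of a two-state MDP built from the arguments documented above (the states, dynamics, and rewards are made up for illustration, not part of the package):

using QuickPOMDPs

# Hypothetical two-state maintenance problem, for illustration only
S = [:healthy, :sick]
A = [:treat, :wait]

function T(s, a, sp)
    if a == :treat
        return sp == :healthy ? 0.9 : 0.1  # treatment usually restores health
    else
        return sp == s ? 0.8 : 0.2         # waiting mostly leaves the state unchanged
    end
end

# reward being healthy; charge a cost for treating
R(s, a) = (s == :healthy ? 1.0 : 0.0) - (a == :treat ? 0.5 : 0.0)

m = DiscreteExplicitMDP(S, A, T, R, 0.95)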
QuickPOMDPs.DiscreteExplicitPOMDP (Type)
DiscreteExplicitPOMDP(S,A,O,T,Z,R,γ,[b₀],[terminals=Set()])

Create a POMDP defined by the tuple (S,A,O,T,Z,R,γ).

Arguments

Required

  • S,A,O: State, action, and observation spaces (typically Vectors)
  • T::Function: Transition probability distribution function; $T(s,a,s')$ is the probability of transitioning to state $s'$ from state $s$ after taking action $a$.
  • Z::Function: Observation probability distribution function; $Z(a, s', o)$ is the probability of receiving observation $o$ when state $s'$ is reached after action $a$.
  • R::Function: Reward function; $R(s,a)$ is the reward for taking action $a$ in state $s$.
  • γ::Float64: Discount factor.

Optional

  • b₀=Uniform(S): Initial belief/state distribution (see POMDPModelTools.Deterministic and POMDPModelTools.SparseCat for other options).

Keyword

  • terminals=Set(): Set of terminal states. Once a terminal state is reached, no more actions can be taken or reward received.
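As a quick illustration, here is a rough sketch of the classic tiger problem, which the example at the top of this page also constructs (the probabilities and rewards below are the conventional ones for that problem, not prescribed by the constructor):

using QuickPOMDPs

S = [:left, :right]           # the door hiding the tiger
A = [:left, :right, :listen]  # open a door or listen
O = [:left, :right]           # noisy report of the tiger's location

# listening leaves the state unchanged; opening a door resets the problem
T(s, a, sp) = a == :listen ? float(sp == s) : 0.5

# listening is 85% accurate; observations after opening a door are uninformative
Z(a, sp, o) = a == :listen ? (o == sp ? 0.85 : 0.15) : 0.5

function R(s, a)
    if a == :listen
        return -1.0    # small cost to listen
    elseif a == s
        return -100.0  # opened the tiger's door
    else
        return 10.0    # found the treasure
    end
end

m = DiscreteExplicitPOMDP(S, A, O, T, Z, R, 0.95)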

Usage from Python

The Discrete Explicit interface can be used from Python via pyjulia. See examples/tiger.py for an example.

diff --git a/dev/index.html b/dev/index.html
index 15d580e..048b5cd 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,2 @@
-QuickPOMDPs.jl · QuickPOMDPs
+QuickPOMDPs.jl · QuickPOMDPs

QuickPOMDPs.jl

QuickPOMDPs is a package that makes defining Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) easy.

The models defined with QuickPOMDPs are compatible with POMDPs.jl and can be used with any solvers from that ecosystem that are appropriate for the problem.

Defining a model with QuickPOMDPs does not require any object-oriented programming, so this package may be easier for some users (e.g. those coming from MATLAB) to pick up than POMDPs.jl itself.

QuickPOMDPs contains two interfaces:

  1. The Discrete Explicit Interface is suitable for problems with small discrete state, action, and observation spaces. This interface is pedagogically useful because each element of the $(S, A, O, T, Z, R, \gamma)$ tuple for a POMDP and the $(S, A, T, R, \gamma)$ tuple for an MDP is defined explicitly in a straightforward manner. See the Discrete Explicit Interface page for more details.
  2. The Quick Interface is much more flexible, exposing nearly all of the features of POMDPs.jl as constructor keyword arguments; a brief sketch of this style follows this list. See the Quick Interface page for more details.
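To give a feel for the second style, here is a minimal sketch of a made-up five-state chain MDP written with the Quick Interface (the keyword names follow the Quick Interface documentation; the problem itself is invented for illustration):

using QuickPOMDPs
using POMDPModelTools: Deterministic

# Made-up chain problem: walk left or right along states 1 to 5,
# collecting reward while in state 5
m = QuickMDP(
    states = 1:5,
    actions = [-1, 1],
    transition = (s, a) -> Deterministic(clamp(s + a, 1, 5)),
    reward = (s, a) -> s == 5 ? 1.0 : 0.0,
    initialstate = Deterministic(1),
    discount = 0.9
)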

See the Solver tutorials in the POMDPExamples package for examples of how to solve and simulate problems defined with QuickPOMDPs.

Contents

diff --git a/dev/quick/index.html b/dev/quick/index.html
index 6aced3c..306defe 100644
--- a/dev/quick/index.html
+++ b/dev/quick/index.html
@@ -57,4 +57,4 @@
 POMDPModelTools.ordered_states(m::typeof(m)) = 1:3

or

m = QuickMDP(:myproblem, ...)
 
-POMDPModelTools.ordered_states(m::QuickMDP{:myproblem}) = 1:3
+POMDPModelTools.ordered_states(m::QuickMDP{:myproblem}) = 1:3
Note

A manually-specified ID must be a valid type parameter value, such as a Symbol or an isbits value.

State, action, and observation type inference

The state, action, and observation types for a Quick(PO)MDP are usually inferred from the keyword arguments. For instance, in the example above, the state type is inferred to be Tuple{Float64, Float64} from the initialstate argument, and the action type is inferred to be Float64 from the actions argument. If QuickPOMDPs is unable to infer one of these types, or the user wants to override or specify the type manually, the statetype, actiontype, or obstype keywords should be used.
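For instance, the following sketch (a stand-in model, not the example above) specifies the types explicitly instead of relying on inference:

using QuickPOMDPs
using POMDPModelTools: Deterministic

# statetype and actiontype are given explicitly; they would otherwise be
# inferred from the initialstate and actions arguments
m = QuickMDP(
    statetype = Tuple{Float64, Float64},
    actiontype = Float64,
    actions = [-1.0, 0.0, 1.0],
    initialstate = Deterministic((-0.5, 0.0)),
    transition = (s, a) -> Deterministic((s[1] + s[2], clamp(s[2] + 0.1*a, -1.0, 1.0))),
    reward = (s, a) -> -abs(a),
    discount = 0.95
)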

Visualization

Visualization can be accomplished using the render keyword argument. See the documentation of POMDPModelTools.render for more information. An example can be found in example/mountaincarwithvisualization.jl.
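A minimal sketch, assuming the render keyword hooks into POMDPModelTools.render as described above (the model and the returned string are made up; a real render function could return a plot or image instead):

using QuickPOMDPs
using POMDPModelTools

# Made-up three-state model with a render function that returns a string
m = QuickMDP(
    states = 1:3,
    actions = [:left, :right],
    transition = (s, a) -> Deterministic(clamp(a == :left ? s - 1 : s + 1, 1, 3)),
    reward = (s, a) -> s == 3 ? 1.0 : 0.0,
    discount = 0.9,
    render = step -> "agent at state $(step.s)"
)

POMDPModelTools.render(m, (s=2,))  # returns "agent at state 2"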

diff --git a/dev/search/index.html b/dev/search/index.html
index 0b4376d..ff0c49f 100644
--- a/dev/search/index.html
+++ b/dev/search/index.html
@@ -1,2 +1,2 @@
-Search · QuickPOMDPs
+Search · QuickPOMDPs
