# Uses of Interface State
**Packages that use State**

| Package |
|---|
| `edu.iastate.jrelm.core` |
| `edu.iastate.jrelm.rl` |
## Uses of State in edu.iastate.jrelm.core

**Classes in edu.iastate.jrelm.core with type parameters of type State**

| Type | Description |
|---|---|
| `interface StateDomain<I,S extends State>` | Representation of an agent's state space. |

**Classes in edu.iastate.jrelm.core that implement State**

| Type | Description |
|---|---|
| `class SimpleState<O>` | Simple class that implements the State interface. |
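To show how these generic signatures fit together, here is a minimal, self-contained sketch. The `State`, `SimpleState`, and `SimpleStateDomain` shapes below are simplified stand-ins written for illustration; they are not the actual JReLM declarations, whose members are not shown on this page.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical minimal stand-in for the State interface: a state identified
// by a value of type I. (Assumed shape, not the JReLM source.)
interface State<I> {
    I getID();
}

// Stand-in for SimpleState<O>: wraps an arbitrary identifier as a state.
class SimpleState<O> implements State<O> {
    private final O id;
    SimpleState(O id) { this.id = id; }
    public O getID() { return id; }
}

// Stand-in for StateDomain<I, S extends State>: an agent's finite state space.
class SimpleStateDomain<I, S extends State<I>> {
    private final List<S> states;
    SimpleStateDomain(List<S> states) { this.states = states; }
    public List<S> getStates() { return states; }
}

public class StateSketch {
    public static void main(String[] args) {
        // Build a two-state domain and walk its states.
        SimpleStateDomain<String, SimpleState<String>> domain =
            new SimpleStateDomain<>(Arrays.asList(
                new SimpleState<>("LOW"), new SimpleState<>("HIGH")));
        for (SimpleState<String> s : domain.getStates()) {
            System.out.println(s.getID());
        }
    }
}
```

The bounded parameter `S extends State` is what lets a domain hold any state type an agent defines while still exposing the common `State` operations.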
## Uses of State in edu.iastate.jrelm.rl

**Classes in edu.iastate.jrelm.rl with type parameters of type State**

| Type | Description |
|---|---|
| `interface Policy<AI,A extends Action,SI,S extends State>` | Interface for building a reinforcement learning policy, which is typically a mapping from States to Actions. |
| `class SimplePolicy<AI,A extends Action,SI,S extends State>` | A simple implementation of the Policy interface. |
**Methods in edu.iastate.jrelm.rl that return types with arguments of type State**

| Return type | Method and description |
|---|---|
| `StateDomain<java.lang.Object,State>` | `AbstractStatelessPolicy.getStateDomain()`: Defined away, since this type of policy does not work with States. |
**Methods in edu.iastate.jrelm.rl with parameters of type State**

| Return type | Method and description |
|---|---|
| `A` | `SimplePolicy.generateAction(State<SI> currentState)`: Given the current State, choose an Action according to the current probability distribution function. |
| `double[]` | `SimplePolicy.getDistribution(State<SI> aState)`: Retrieve the probability distribution function used in selecting Actions from the ActionDomain in the given State. |
| `void` | `SimplePolicy.setDistribution(State<SI> aState, double[] pdf)`: Set the probability distribution function used in selecting Actions from the ActionDomain for the given State. |
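The three methods above form a simple pattern: a policy stores one probability distribution per state and samples an action from it. The sketch below illustrates that pattern with a hypothetical stand-in class; the method names mirror the documented API, but the keying of distributions by a plain `String` state ID and the roulette-wheel sampling are illustrative assumptions, not the JReLM implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Illustrative stand-in for the SimplePolicy pattern: each state keeps a
// probability distribution function (pdf) over action indices, and
// generateAction samples an action index from that pdf.
public class PolicySketch {
    private final Map<String, double[]> distributions = new HashMap<>();
    private final Random rng = new Random(42); // fixed seed for repeatability

    // Analogue of SimplePolicy.setDistribution(aState, pdf).
    public void setDistribution(String stateID, double[] pdf) {
        distributions.put(stateID, pdf.clone());
    }

    // Analogue of SimplePolicy.getDistribution(aState).
    public double[] getDistribution(String stateID) {
        return distributions.get(stateID).clone();
    }

    // Analogue of SimplePolicy.generateAction(currentState): roulette-wheel
    // selection of an action index according to the state's pdf.
    public int generateAction(String stateID) {
        double[] pdf = distributions.get(stateID);
        double r = rng.nextDouble();
        double cumulative = 0.0;
        for (int i = 0; i < pdf.length; i++) {
            cumulative += pdf[i];
            if (r < cumulative) return i;
        }
        return pdf.length - 1; // guard against floating-point rounding
    }

    public static void main(String[] args) {
        PolicySketch policy = new PolicySketch();
        // In state "LOW", favor action 0 with probability 0.9.
        policy.setDistribution("LOW", new double[] {0.9, 0.1});
        System.out.println("chose action " + policy.generateAction("LOW"));
    }
}
```

Note that `AbstractStatelessPolicy` opts out of this pattern entirely: its `getStateDomain()` is defined away because a stateless policy keeps a single distribution rather than one per state.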