State Function Approximation: Linear Function

In the previous posts, we used different techniques to build and keep updating state-action tables. But that approach becomes impossible when the number of states and actions gets huge, so in this post we discuss using a parameterized function to approximate the value function.

 

Basic Idea of State Function Approximation

Instead of looking values up in a state-action table, we build a black box with weights inside it. We tell the black box which state's value we want, and it calculates and outputs that value. The weights can be learned from data, which makes this a typical supervised learning problem.


 

The input to the system is actually a feature representation of the state S, so we need to do feature engineering (feature extraction) to represent the state. x(s) denotes the feature vector of state s:

$$\mathbf{x}(s) = \big(x_1(s),\ x_2(s),\ \dots,\ x_n(s)\big)^{\top}$$

 

Linear Function Approximation with an Oracle

For the black box we can use different models. In this post we use a linear function: the inner product of the features and the weights.

$$\hat{v}(s, \mathbf{w}) = \mathbf{x}(s)^{\top}\mathbf{w} = \sum_{j=1}^{n} x_j(s)\, w_j$$
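As a minimal sketch in Python (the polynomial feature map below is a hypothetical choice, not something prescribed by the lectures), the linear estimate is just a dot product of features and weights:

```python
import numpy as np

def features(s):
    """Hypothetical feature map x(s): a few polynomial features of a scalar state."""
    return np.array([1.0, s, s ** 2, np.sin(s)])

def v_hat(s, w):
    """Linear value estimate: v_hat(s, w) = x(s)^T w."""
    return features(s).dot(w)

w = np.zeros(4)           # one weight per feature
print(v_hat(2.0, w))      # 0.0 until the weights are learned
```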

 

Assume we are cheating for now and know the true state-value function. Then we can do gradient descent on the mean squared error:

$$J(\mathbf{w}) = \mathbb{E}_{\pi}\big[(v_{\pi}(S) - \hat{v}(S, \mathbf{w}))^2\big]$$

$$\Delta \mathbf{w} = -\tfrac{1}{2}\,\alpha\, \nabla_{\mathbf{w}} J(\mathbf{w}) = \alpha\, \mathbb{E}_{\pi}\big[(v_{\pi}(S) - \hat{v}(S, \mathbf{w}))\, \mathbf{x}(S)\big]$$

 

and SGD samples the gradient:

$$\Delta \mathbf{w} = \alpha\, (v_{\pi}(S) - \hat{v}(S, \mathbf{w}))\, \mathbf{x}(S)$$
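A rough sketch of one such SGD step, assuming we can query the oracle value `v_true` for the sampled state (purely hypothetical, since no oracle exists in practice):

```python
import numpy as np

def features(s):
    return np.array([1.0, s, s ** 2])          # hypothetical feature map x(s)

def sgd_step(w, s, v_true, alpha=0.1):
    """One SGD step toward the oracle's value v_pi(s) = v_true."""
    x = features(s)
    error = v_true - x.dot(w)                  # v_pi(S) - v_hat(S, w)
    return w + alpha * error * x               # w <- w + alpha * error * x(S)

w = np.zeros(3)
w = sgd_step(w, s=2.0, v_true=1.0)             # pretend the oracle says v_pi(2.0) = 1.0
```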

 

Model-Free Value Function Approximation

Then we go back to reality and realize there is no oracle to help us, so the only methods we can count on are model-free algorithms. We first use Monte Carlo, replacing the true value in the SGD update with the sampled return G_t:

$$\Delta \mathbf{w} = \alpha\, (G_t - \hat{v}(S_t, \mathbf{w}))\, \mathbf{x}(S_t)$$
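A sketch of the Monte Carlo version, assuming `episode` is a list of (state, reward) pairs collected by following the policy:

```python
import numpy as np

def features(s):
    return np.array([1.0, s, s ** 2])          # hypothetical feature map x(s)

def mc_update(episode, w, alpha=0.05, gamma=1.0):
    """Monte Carlo update: push v_hat(S_t, w) toward the sampled return G_t."""
    G = 0.0
    for s, r in reversed(episode):             # accumulate returns backwards
        G = r + gamma * G
        x = features(s)
        w = w + alpha * (G - x.dot(w)) * x     # w <- w + alpha (G_t - v_hat) x(S_t)
    return w

w = mc_update([(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)], np.zeros(3))
```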

 

We can also use TD(0) learning. The cost function is:

$$J(\mathbf{w}) = \mathbb{E}_{\pi}\big[(R_{t+1} + \gamma\, \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w}))^2\big]$$

the gradient is:

$$\Delta \mathbf{w} = \alpha\, (R_{t+1} + \gamma\, \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w}))\, \mathbf{x}(S_t)$$

The algorithm can be described as:

1. Initialize the weights w (for example, to zero).
2. For each episode, start from an initial state S.
3. At each step, choose an action A from the policy π, take it, and observe the reward R and the next state S'.
4. Update the weights: w ← w + α (R + γ v̂(S', w) − v̂(S, w)) x(S), using v̂(S', w) = 0 if S' is terminal.
5. Set S ← S' and repeat until the episode ends.
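Here is a rough Python sketch of that loop; the gym-style `env.reset()` / `env.step()` interface and the fixed `policy` function are assumptions for illustration only:

```python
import numpy as np

def features(s):
    return np.array([1.0, s, s ** 2])          # hypothetical feature map x(s)

def td0_prediction(env, policy, num_episodes=500, alpha=0.05, gamma=0.99):
    """Semi-gradient TD(0): bootstrap the target from the current estimate at S'."""
    w = np.zeros(len(features(0.0)))
    for _ in range(num_episodes):
        s, _ = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            target = r + (0.0 if terminated else gamma * features(s_next).dot(w))
            x = features(s)
            w = w + alpha * (target - x.dot(w)) * x
            s = s_next
    return w
```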

 

Model-Free Control Based on State-Action Value Function Approximation

As with state value function approximation, we extract features from the target problem and build a feature vector:

$$\mathbf{x}(s, a) = \big(x_1(s, a),\ x_2(s, a),\ \dots,\ x_n(s, a)\big)^{\top}$$

Then the linear estimate of the Q-function is:

$$\hat{q}(s, a, \mathbf{w}) = \mathbf{x}(s, a)^{\top}\mathbf{w} = \sum_{j=1}^{n} x_j(s, a)\, w_j$$
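One common illustrative way to build x(s, a) for a small discrete action set is to copy the state features into the block belonging to the chosen action; the helper below is a hypothetical example of that construction, not the only option:

```python
import numpy as np

def q_features(s, a, n_actions=3):
    """Hypothetical x(s, a): state features placed in the block of the chosen action."""
    state_feats = np.array([1.0, s, s ** 2])
    x = np.zeros(len(state_feats) * n_actions)
    x[a * len(state_feats):(a + 1) * len(state_feats)] = state_feats
    return x

def q_hat(s, a, w):
    """Linear Q estimate: q_hat(s, a, w) = x(s, a)^T w."""
    return q_features(s, a).dot(w)

w = np.zeros(9)                                # 3 state features x 3 actions
print(q_hat(2.0, 1, w))
```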

 

To minimize the MSE cost function, we take the derivative and obtain the Monte Carlo gradient:

$$\Delta \mathbf{w} = \alpha\, (G_t - \hat{q}(S_t, A_t, \mathbf{w}))\, \mathbf{x}(S_t, A_t)$$

SARSA gradient:

$$\Delta \mathbf{w} = \alpha\, (R_{t+1} + \gamma\, \hat{q}(S_{t+1}, A_{t+1}, \mathbf{w}) - \hat{q}(S_t, A_t, \mathbf{w}))\, \mathbf{x}(S_t, A_t)$$
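A sketch of one semi-gradient SARSA update, where `q_features` is a feature map like the hypothetical one above and `a_next` is the action actually chosen in the next state:

```python
def sarsa_update(w, s, a, r, s_next, a_next, q_features,
                 alpha=0.1, gamma=0.99, terminal=False):
    """Semi-gradient SARSA: the target uses the next action the policy actually takes."""
    x = q_features(s, a)
    q_next = 0.0 if terminal else q_features(s_next, a_next).dot(w)
    td_error = r + gamma * q_next - x.dot(w)
    return w + alpha * td_error * x            # w <- w + alpha * td_error * x(S_t, A_t)
```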

Q-Learning gradient:

$$\Delta \mathbf{w} = \alpha\, (R_{t+1} + \gamma\, \max_{a'} \hat{q}(S_{t+1}, a', \mathbf{w}) - \hat{q}(S_t, A_t, \mathbf{w}))\, \mathbf{x}(S_t, A_t)$$
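And the corresponding semi-gradient Q-learning update, which replaces the on-policy next action with a max over all actions (again with a hypothetical `q_features` and a discrete action set):

```python
def q_learning_update(w, s, a, r, s_next, q_features, n_actions,
                      alpha=0.1, gamma=0.99, terminal=False):
    """Semi-gradient Q-learning: the target maximizes q_hat over all next actions."""
    x = q_features(s, a)
    q_next = 0.0 if terminal else max(q_features(s_next, a2).dot(w)
                                      for a2 in range(n_actions))
    td_error = r + gamma * q_next - x.dot(w)
    return w + alpha * td_error * x            # w <- w + alpha * td_error * x(S_t, A_t)
```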

References:

https://www.youtube.com/watch?v=buptHUzDKcE

https://www.youtube.com/watch?v=UoPei5o4fps&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ&index=6

