Nice work!
I have several questions about the very interesting RL-in-real setting (e.g. Sections 5.4-5.5):
- reward: the paper mentions that a binary 0/1 reward is used. My question is how the reward function is implemented. Is it provided by a human?
- the second question is that there seems to be a human operator resetting the scene in between episodes. At least from the video, the initial positions of the objects appear almost identical (see below) across episodes. I'm curious about the reason for this, and also how the performance changes as we deviate more and more from this almost identical setting.