
Ensure gradients are actually copied to the shared model in chapter-11.ipynb/A3C #25

@MarginalCentrality

Description


In the optimize_model function of A3C, gradients from the local model are copied to the shared model only when shared_param.grad is None. However, it seems that shared_param.grad will never be None again after the first copy, because zero_grad() zeroes the existing gradient tensors in place rather than resetting them to None. Maybe we need to replace "self.shared_value_optimizer.zero_grad()" with "self.shared_value_optimizer.zero_grad(set_to_none=True)". The same change should also be applied to "self.shared_policy_optimizer".
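For illustration, here is a minimal, self-contained sketch of the pattern being described. The helper name ensure_shared_grads, the toy models, the loss, and the optimizer setup are assumptions for the sake of the example, not the notebook's actual code:

```python
import torch
import torch.nn as nn
import torch.optim as optim

def ensure_shared_grads(local_model, shared_model):
    """Copy gradients from the local worker model to the shared model."""
    for local_param, shared_param in zip(local_model.parameters(),
                                         shared_model.parameters()):
        if shared_param.grad is None:
            # Runs only while shared_param.grad is None. If the shared
            # optimizer resets grads with plain zero_grad() (leaving
            # zero-filled tensors rather than None), this copy is
            # silently skipped on every iteration after the first.
            shared_param.grad = local_param.grad.clone()

# Hypothetical shared/local models standing in for the notebook's networks.
shared_model = nn.Linear(4, 1)
local_model = nn.Linear(4, 1)
local_model.load_state_dict(shared_model.state_dict())

shared_optimizer = optim.Adam(shared_model.parameters(), lr=1e-3)

for step in range(3):
    # Dummy loss on the local model, just to produce gradients.
    loss = local_model(torch.randn(8, 4)).pow(2).mean()
    local_model.zero_grad()
    loss.backward()

    ensure_shared_grads(local_model, shared_model)
    shared_optimizer.step()

    # Proposed fix: reset grads to None so the next copy is not skipped.
    # Plain shared_optimizer.zero_grad() can leave zero-filled tensors
    # behind (set_to_none defaulted to False in older PyTorch releases).
    shared_optimizer.zero_grad(set_to_none=True)
```

With set_to_none=True, shared_param.grad is None again at the start of each iteration, so the copy in ensure_shared_grads runs every time instead of only once.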
