Store the converted TensorRT engine and context so it can be loaded and used faster next time #4597

@leomqyu

Description

Hi!

May I kindly ask if there are APIs to store the engine and context created by TensorRT in files (just like how we can store ONNX models in a .onnx file), so that they can be loaded and used faster next time?

Thank you!
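For reference, here is a minimal sketch of how engine serialization is commonly done with the TensorRT Python API. It assumes `engine` is an already-built `trt.ICudaEngine` (e.g. produced from an ONNX model by a `Builder` elsewhere); note that the execution context itself is not saved to disk, only the engine is, and the context is recreated cheaply from the deserialized engine.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def save_engine(engine, path="model.engine"):
    """Serialize a built ICudaEngine to a file."""
    serialized = engine.serialize()  # returns an IHostMemory buffer
    with open(path, "wb") as f:
        f.write(serialized)

def load_engine(path="model.engine"):
    """Deserialize an engine from a file and recreate an execution context."""
    runtime = trt.Runtime(TRT_LOGGER)
    with open(path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()
    return engine, context
```

Keep in mind that a serialized engine is generally tied to the TensorRT version and GPU it was built on, so it may need to be rebuilt when either changes.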
