mobile-sam mnn inference performance #1

@MaybeShewill-CV

Description

I wonder whether MobileSAM can reach the inference performance reported in the original paper when using the MNN backend. The paper reports that the image encoder takes 8 ms and the mask decoder takes 4 ms.

I've tested it with the Interpreter/Session API instead of the Module API, but I cannot reach 8 ms when running the encoder. :)
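For reference, a minimal timing sketch of the kind of Interpreter/Session benchmark described above might look like the following. The model filename, input shape (1x3x1024x1024), thread count, and run counts are assumptions for illustration, not details taken from this issue:

```cpp
// Minimal timing harness for a MobileSAM encoder with the MNN
// Interpreter/Session API. File name, input shape, and thread
// count below are assumptions, not values from the issue.
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <chrono>
#include <cstdio>
#include <memory>

int main() {
    // Hypothetical converted encoder model; adjust to your own file.
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("mobile_sam_encoder.mnn"));

    MNN::ScheduleConfig config;
    config.type      = MNN_FORWARD_CPU;  // or another backend type
    config.numThread = 4;                // assumed thread count
    MNN::Session* session = net->createSession(config);

    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    // MobileSAM's encoder typically takes a 1x3x1024x1024 image.
    net->resizeTensor(input, {1, 3, 1024, 1024});
    net->resizeSession(session);

    // Copy a dummy preprocessed image into the input tensor.
    MNN::Tensor inputHost(input, MNN::Tensor::CAFFE);
    for (int i = 0; i < inputHost.elementSize(); ++i) {
        inputHost.host<float>()[i] = 0.5f;
    }
    input->copyFromHostTensor(&inputHost);

    // Warm up so one-time allocations don't skew the measurement.
    for (int i = 0; i < 5; ++i) net->runSession(session);

    const int runs = 20;
    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < runs; ++i) net->runSession(session);
    auto end = std::chrono::high_resolution_clock::now();

    double ms =
        std::chrono::duration<double, std::milli>(end - start).count() / runs;
    printf("encoder forward: %.2f ms (avg over %d runs)\n", ms, runs);
    return 0;
}
```

Note that this times only the session forward pass; image preprocessing (resize, normalization) would add to the end-to-end latency.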
