Commit c43c774

Update documentation
1 parent e3ca02b commit c43c774

2 files changed: 27 additions & 15 deletions

master/python/models/model.html

Lines changed: 6 additions & 3 deletions
@@ -507,9 +507,11 @@
 <dt class="field-odd">Parameters<span class="colon">:</span></dt>
 <dd class="field-odd"><ul class="simple">
 <li><p><strong>model</strong> (<em>str</em>) – model name from OpenVINO Model Zoo, path to model, OVMS URL</p></li>
-<li><p><strong>configuration</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">dict</span></code>, optional) – dictionary of model config with model properties, for example confidence_threshold, labels</p></li>
+<li><p><strong>configuration</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">dict</span></code>, optional) – dictionary of model config with model properties, for example
+confidence_threshold, labels</p></li>
 <li><p><strong>model_type</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">str</span></code>, optional) – name of model wrapper to create (e.g. “ssd”)</p></li>
-<li><p><strong>preload</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">bool</span></code>, optional) – whether to call load_model(). Can be set to false to reshape model before loading</p></li>
+<li><p><strong>preload</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">bool</span></code>, optional) – whether to call load_model(). Can be set to false to reshape model before
+loading.</p></li>
 <li><p><strong>core</strong> (<em>optional</em>) – openvino.Core instance, passed to OpenvinoAdapter</p></li>
 <li><p><strong>weights_path</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">str</span></code>, optional) – path to .bin file with model weights</p></li>
 <li><p><strong>adaptor_parameters</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">dict</span></code>, optional) – parameters of ModelAdaptor</p></li>
@@ -519,7 +521,8 @@
 <li><p><strong>max_num_requests</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">int</span></code>, optional) – number of infer requests for asynchronous inference</p></li>
 <li><p><strong>precision</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">str</span></code>, optional) – inference precision (e.g. “FP16”)</p></li>
 <li><p><strong>download_dir</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">str</span></code>, optional) – directory where to store downloaded models</p></li>
-<li><p><strong>cache_dir</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">str</span></code>, optional) – directory where to store compiled models to reduce the load time before the inference</p></li>
+<li><p><strong>cache_dir</strong> (<code class="xref py py-obj docutils literal notranslate"><span class="pre">str</span></code>, optional) – directory where to store compiled models to reduce the load time before
+the inference.</p></li>
 </ul>
 </dd>
 <dt class="field-even">Returns<span class="colon">:</span></dt>
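
The parameter list touched by this diff documents the Model factory options. As a minimal, non-authoritative sketch (assuming these options are accepted by Model.create_model() in the model_api Python package; the file path, threshold value, and cache directory are illustrative placeholders):

```python
# Minimal sketch, assuming the model_api Python package; the paths and the
# confidence_threshold value are illustrative placeholders.
from model_api.models import Model

model = Model.create_model(
    "model.xml",                                  # model name, path to model, or OVMS URL
    model_type="ssd",                             # name of the model wrapper to create
    configuration={"confidence_threshold": 0.5},  # model properties, e.g. confidence_threshold, labels
    preload=False,                                # skip load_model() so the model can be reshaped first
    cache_dir="model_cache",                      # store compiled models to reduce load time
)
model.load_model()                                # load explicitly once reshaping is done
```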

master/python/models/visual_prompting.html

Lines changed: 21 additions & 12 deletions
@@ -403,7 +403,8 @@
 <li><p><strong>encoder_model</strong> (<a class="reference internal" href="sam_models.html#model_api.models.sam_models.SAMImageEncoder" title="model_api.models.sam_models.SAMImageEncoder"><em>SAMImageEncoder</em></a>) – initialized decoder wrapper</p></li>
 <li><p><strong>decoder_model</strong> (<a class="reference internal" href="sam_models.html#model_api.models.sam_models.SAMDecoder" title="model_api.models.sam_models.SAMDecoder"><em>SAMDecoder</em></a>) – initialized encoder wrapper</p></li>
 <li><p><strong>reference_features</strong> (<a class="reference internal" href="#model_api.models.visual_prompting.VisualPromptingFeatures" title="model_api.models.visual_prompting.VisualPromptingFeatures"><em>VisualPromptingFeatures</em></a><em> | </em><em>None</em><em>, </em><em>optional</em>) – Previously generated reference features.
-Once the features are passed, one can skip learn() method, and start predicting masks right away. Defaults to None.</p></li>
+Once the features are passed, one can skip learn() method, and start predicting masks right away.
+Defaults to None.</p></li>
 <li><p><strong>threshold</strong> (<em>float</em><em>, </em><em>optional</em>) – Threshold to match vs reference features on infer(). Greater value means a</p></li>
 <li><p><strong>0.65.</strong> (<em>stricter matching. Defaults to</em>)</p></li>
 </ul>
@@ -441,18 +442,22 @@
 <dt class="field-odd">Parameters<span class="colon">:</span></dt>
 <dd class="field-odd"><ul class="simple">
 <li><p><strong>image</strong> (<em>np.ndarray</em>) – HWC-shaped image</p></li>
-<li><p><strong>reference_features</strong> (<a class="reference internal" href="#model_api.models.visual_prompting.VisualPromptingFeatures" title="model_api.models.visual_prompting.VisualPromptingFeatures"><em>VisualPromptingFeatures</em></a><em> | </em><em>None</em><em>, </em><em>optional</em>) – Reference features object obtained during previous learn() calls.</p></li>
-<li><p><strong>passed</strong> (<em>If not</em>)</p></li>
-<li><p><strong>used</strong> (<em>object internal state is</em>)</p></li>
-<li><p><strong>None.</strong> (<em>which reflects the last learn</em><em>(</em><em>) </em><em>call. Defaults to</em>)</p></li>
-<li><p><strong>apply_masks_refinement</strong> (<em>bool</em><em>, </em><em>optional</em>) – Flag controlling additional refinement stage on inference. Once enabled, decoder will</p></li>
-<li><p><strong>True.</strong> (<em>be launched 2 extra times to refine the masks obtained with the first decoder call. Defaults to</em>)</p></li>
+<li><p><strong>reference_features</strong> (<a class="reference internal" href="#model_api.models.visual_prompting.VisualPromptingFeatures" title="model_api.models.visual_prompting.VisualPromptingFeatures"><em>VisualPromptingFeatures</em></a><em> | </em><em>None</em><em>, </em><em>optional</em>) – Reference features object obtained during
+previous learn() calls. If not passed, object internal state is used, which reflects the last learn()
+call. Defaults to None.</p></li>
+<li><p><strong>apply_masks_refinement</strong> (<em>bool</em><em>, </em><em>optional</em>) – Flag controlling additional refinement stage on inference.</p></li>
+<li><p><strong>enabled</strong> (<em>Once</em>)</p></li>
+<li><p><strong>decoder</strong> (<em>decoder will be launched 2 extra times to refine the masks obtained with the first</em>)</p></li>
+<li><p><strong>True.</strong> (<em>call. Defaults to</em>)</p></li>
 </ul>
 </dd>
 <dt class="field-even">Returns<span class="colon">:</span></dt>
-<dd class="field-even"><p>Mapping label -&gt; predicted mask. Each mask object contains a list of binary masks, and a list of
-related prompts. Each binary mask corresponds to one prompt point. Class mask can be obtained by applying OR operation to all
-mask corresponding to one label.</p>
+<dd class="field-even"><p><dl class="simple">
+<dt>Mapping label -&gt; predicted mask. Each mask object contains a list of binary masks,</dt><dd><p>and a list of related prompts. Each binary mask corresponds to one prompt point. Class mask can be
+obtained by applying OR operation to all mask corresponding to one label.</p>
+</dd>
+</dl>
+</p>
 </dd>
 <dt class="field-odd">Return type<span class="colon">:</span></dt>
 <dd class="field-odd"><p><a class="reference internal" href="utils.html#model_api.models.utils.ZSLVisualPromptingResult" title="model_api.models.utils.ZSLVisualPromptingResult">ZSLVisualPromptingResult</a></p>
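
The Returns description in this hunk states that each predicted mask holds one binary mask per prompt point and that a class mask is obtained by OR-ing all masks for a label. A tiny, self-contained numpy illustration of that combination step (the masks below are synthetic stand-ins, not real ZSLVisualPromptingResult contents):

```python
import numpy as np

# Synthetic stand-ins for the per-prompt binary masks of a single label,
# as described in the Returns section above (one binary mask per prompt point).
masks_for_label = [
    np.array([[1, 0], [0, 0]], dtype=bool),  # mask from the first prompt point
    np.array([[0, 0], [0, 1]], dtype=bool),  # mask from the second prompt point
]

# The class mask is the element-wise OR over all masks belonging to that label.
class_mask = np.logical_or.reduce(masks_for_label)
print(class_mask)  # [[ True False], [False  True]]
```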
@@ -482,8 +487,12 @@
 </ul>
 </dd>
 <dt class="field-even">Returns<span class="colon">:</span></dt>
-<dd class="field-even"><p>return values are the updated VPT reference features and reference masks.
-The shape of the reference mask is N_labels x H x W, where H and W are the same as in the input image.</p>
+<dd class="field-even"><p><dl class="simple">
+<dt>return values are the updated VPT reference features and</dt><dd><p>reference masks.</p>
+</dd>
+</dl>
+<p>The shape of the reference mask is N_labels x H x W, where H and W are the same as in the input image.</p>
+</p>
 </dd>
 <dt class="field-odd">Return type<span class="colon">:</span></dt>
 <dd class="field-odd"><p>tuple[<a class="reference internal" href="#model_api.models.visual_prompting.VisualPromptingFeatures" title="model_api.models.visual_prompting.VisualPromptingFeatures">VisualPromptingFeatures</a>, np.ndarray]</p>
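
This last hunk only pins down the learn() output contract: a tuple of VisualPromptingFeatures plus an N_labels x H x W reference-mask array, and the constructor hunk earlier notes that the features can be passed back to skip learn(). A hedged sketch of that round trip; the prompter object and the omitted prompt arguments are assumptions not taken from this page:

```python
# Hedged sketch of the learn() contract documented above. `prompter` stands for
# an already constructed zero-shot visual prompter (built from SAMImageEncoder
# and SAMDecoder wrappers) and `image` for an HWC np.ndarray; the prompt
# arguments to learn() are elided because they are not shown in this diff.
features, reference_masks = prompter.learn(image)  # -> (VisualPromptingFeatures, np.ndarray)

n_labels, h, w = reference_masks.shape             # N_labels x H x W
assert (h, w) == image.shape[:2]                   # H and W match the input image

# Per the constructor docs, the returned features can be stored and passed back
# later as reference_features so that learn() can be skipped entirely.
```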
