2 changes: 1 addition & 1 deletion PW_FT_classification/README.md
@@ -66,7 +66,7 @@ The CSV file should have the previously mentioned structure. The code will then
If you don't require data splitting, you can set the `split_data` parameter to `False` in the `config.yaml` file.
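For instance, the corresponding entry in `config.yaml` could look like the sketch below (only the `split_data` key is taken from the text above; the comment is illustrative):

```yaml
# Hypothetical excerpt from configs/config.yaml:
# disable automatic train/val/test splitting of the annotation CSV.
split_data: False
```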

### Demo data
-You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_clf.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `dataset_root` entry in the [config file](./configs/config.yaml) is pointing to the data directory. The testing demo data also has ***an annotation example*** shows how the prefered annotation format looks like.
+You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_clf.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `dataset_root` entry in the [config file](./configs/config.yaml) is pointing to the data directory. The testing demo data also has ***an annotation example*** which shows what the preferred annotation format looks like.

## Configuration

4 changes: 2 additions & 2 deletions PW_FT_detection/README.md
@@ -64,11 +64,11 @@ The `.data/data_example.yaml` file shows an example of the structure.
The .txt files inside each folder of `./data/labels/` must be structured containing each object on a separate line, following the format: class x_center y_center width height. The coordinates for the bounding box should be normalized in the xywh format, with values ranging from 0 to 1.
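As a concrete illustration of this normalized xywh convention, here is a small sketch (the helper function and the example numbers are hypothetical, not part of this repository) that converts a pixel-space bounding box into one label line:

```python
def to_yolo_line(cls_id, x_min, y_min, box_w, box_h, img_w, img_h):
    """Convert a pixel-space box to a normalized
    'class x_center y_center width height' label line (values in [0, 1])."""
    x_center = (x_min + box_w / 2) / img_w
    y_center = (y_min + box_h / 2) / img_h
    return (f"{cls_id} {x_center:.6f} {y_center:.6f} "
            f"{box_w / img_w:.6f} {box_h / img_h:.6f}")

# Example: a 100x50 box with top-left corner (300, 200)
# in a 640x480 image, class 0.
print(to_yolo_line(0, 300, 200, 100, 50, 640, 480))
# → 0 0.546875 0.468750 0.156250 0.104167
```

Each detected object would contribute one such line to the corresponding `.txt` file under `./data/labels/`, matching the format described above.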

### Demo data
-You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_det.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `data` and `test_data` entries in the [config file](./config.yaml) are pointing to the data directory. The testing demo data also has ***an annotation example*** shows how the prefered annotation format looks like.
+You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_det.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `data` and `test_data` entries in the [config file](./config.yaml) are pointing to the data directory. The testing demo data also has ***an annotation example*** which shows what the preferred annotation format looks like.

## Detection models available for Finetuning

-Below you find the models that you can use for fine-tuning, along with their respective names to use in the configuration file.
+Below you will find the models that you can use for fine-tuning, along with their respective names to use in the configuration file.

|Model|Name|License|
|---|---|---|
6 changes: 3 additions & 3 deletions README.md
@@ -37,13 +37,13 @@

## 👋 Welcome to Pytorch-Wildlife

-**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](https://microsoft.github.io/CameraTraps/model_zoo/megadetector/) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practicioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
+**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](https://microsoft.github.io/CameraTraps/model_zoo/megadetector/) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practitioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.

Explore the codebase, functionalities and user interfaces of **Pytorch-Wildlife** through our [documentation](https://microsoft.github.io/CameraTraps/), interactive [HuggingFace web app](https://huggingface.co/spaces/AndresHdzC/pytorch-wildlife) or local [demos and notebooks](./demo).

## 🚀 Quick Start

-👇 Here is a quick example on how to perform detection and classification on a single image using `PyTorch-wildlife`
+👇 Here is a quick example of how to perform detection and classification on a single image using `PyTorch-wildlife`
```python
import numpy as np
from PytorchWildlife.models import detection as pw_detection
@@ -68,7 +68,7 @@ pip install PytorchWildlife
Please refer to our [installation guide](https://microsoft.github.io/CameraTraps/installation/) for more installation information.

## 📃 Documentation
-Please also go to our newly made dofumentation page for more information: [![](https://img.shields.io/badge/Docs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://microsoft.github.io/CameraTraps/)
+Please also go to our newly made documentation page for more information: [![](https://img.shields.io/badge/Docs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://microsoft.github.io/CameraTraps/)

## 🖼️ Examples

2 changes: 1 addition & 1 deletion docs/core_features.md
@@ -15,7 +15,7 @@ In the provided graph, boxes outlined in red represent elements that will be add


### 🚀 Inaugural Model:
-We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from `MegaDetector`. We have `MegaDetectorV5`, which is the same `MegaDetectorV5` model from the previous repository, and many different versions of `MegaDetectorV6` for different usecases.
+We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from `MegaDetector`. We have `MegaDetectorV5`, which is the same `MegaDetectorV5` model from the previous repository, and many different versions of `MegaDetectorV6` for different use cases.


### 📚 Expandable Repository:
2 changes: 1 addition & 1 deletion docs/demo_and_ui/ecoassist.md
@@ -1,2 +1,2 @@
-# Pytorch-Wildlife modelsa are available with AddaxAI (formerly EcoAssist)!
+# Pytorch-Wildlife models are available with AddaxAI (formerly EcoAssist)!
We are thrilled to announce our collaboration with [AddaxAI](https://addaxdatascience.com/addaxai/#spp-models)---a powerful user interface software that enables users to directly load models from the PyTorch-Wildlife model zoo for image analysis on local computers. With AddaxAI, you can now utilize MegaDetectorV5 and the classification models---AI4GAmazonRainforest and AI4GOpossum---for automatic animal detection and identification, alongside a comprehensive suite of pre- and post-processing tools. This partnership aims to enhance the overall user experience with PyTorch-Wildlife models for a general audience. We will work closely to bring more features together for more efficient and effective wildlife analysis in the future. Please refer to their tutorials on how to use Pytorch-Wildlife models with AddaxAI.
4 changes: 2 additions & 2 deletions docs/index.md
@@ -18,12 +18,12 @@


## 👋 Welcome to Pytorch-Wildlife
-**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](model_zoo/megadetector.md) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practicioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
+**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](model_zoo/megadetector.md) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practitioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.


## 🚀 Quick Start

-👇 Here is a brief example on how to perform detection and classification on a single image using `PyTorch-wildlife`
+👇 Here is a brief example of how to perform detection and classification on a single image using `PyTorch-wildlife`
```python
import numpy as np
from PytorchWildlife.models import detection as pw_detection
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -57,7 +57,7 @@ docker run -p 80:80 andreshdz/pytorchwildlife:1.0.2.3 python demo/gradio_demo.py
4. If you want to run any code using the docker image, please use `docker run andreshdz/pytorchwildlife:1.0.2.3` followed by the command that you want to execute.

## Running the Demo
-Here is a brief example on how to perform detection and classification on a single image using `PyTorch-wildlife`:
+Here is a brief example of how to perform detection and classification on a single image using `PyTorch-wildlife`:

```python
import numpy as np