Animatica is my diploma project that animates a still image from a driving video, in real time or offline, using neural networks (First Order Motion Model, FOMM). It automates animation creation: image generation, image-to-video conversion, and post-processing.
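Conceptually, FOMM transfers the motion extracted from a driving video onto a single source image. The sketch below illustrates that idea using the public first-order-model demo helpers (`load_checkpoints`, `make_animation`); the config and checkpoint paths are assumptions, and Animatica's own entry points are described in the usage sections below.

```python
import imageio
import numpy as np
from skimage.transform import resize
from demo import load_checkpoints, make_animation  # public FOMM demo helpers

# Load the pretrained generator and keypoint detector (paths are assumptions).
generator, kp_detector = load_checkpoints(
    config_path="config/vox-256.yaml",
    checkpoint_path="data/checkpoints/vox-cpk.pth.tar",
)

# FOMM works on 256x256 frames with values in [0, 1].
source_image = resize(imageio.imread("source.png"), (256, 256))[..., :3]
driving_video = [
    resize(frame, (256, 256))[..., :3]
    for frame in imageio.mimread("driving.mp4", memtest=False)
]

# Transfer the driving video's motion onto the source image.
predictions = make_animation(source_image, driving_video, generator, kp_detector,
                             relative=True)
imageio.mimsave("result.mp4",
                [(255 * f).astype(np.uint8) for f in predictions], fps=25)
```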
Linux/macOS:
python3 -m venv venv
source venv/bin/activate

Windows:
python -m venv venv
source venv/Scripts/activate
cd app
pip install -r requirements.txt
# env/ml_engine.env
ML_ENGINE_KEY=your-secret-key
API_MODE=local
LOG_LEVEL=debug
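The engine presumably reads these values from the environment at startup. A minimal sketch of loading them with python-dotenv; the variable names come from the file above, but the loading mechanism itself is an assumption:

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

# Load env/ml_engine.env into the process environment.
load_dotenv("env/ml_engine.env")

ML_ENGINE_KEY = os.environ["ML_ENGINE_KEY"]          # required secret
API_MODE = os.getenv("API_MODE", "local")            # "local" by default
LOG_LEVEL = os.getenv("LOG_LEVEL", "info").upper()   # e.g. "DEBUG"
```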
Download the pretrained models and save them in the app/data/checkpoints/ folder.
Checkpoints are available at the following links: google-drive or yandex-disk.
unzip checkpoints.zip
rm checkpoints.zip
On Windows, extract checkpoints.zip with archiving software such as 7-Zip.
Go to the app/src directory:
cd app/src
- Run the project from the Jupyter notebook test.ipynb.
- Run the project using the CLI (command-line interface).
Examples:
python run_model.py --mode train --configs config.yaml
python run_model.py --mode reconstruction --configs config.yaml --checkpoint path/to/ckpt
python run_model.py --mode animate --configs config.yaml --checkpoint path/to/ckpt
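Judging by these flags, run_model.py selects a pipeline via --mode and takes a YAML config plus an optional checkpoint. A plausible skeleton of such an entry point is sketched below; only the flag names come from the examples above, everything else is an assumption:

```python
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(description="Animatica ML engine runner")
    parser.add_argument("--mode", required=True,
                        choices=["train", "reconstruction", "animate"])
    parser.add_argument("--configs", required=True, help="path to a YAML config")
    parser.add_argument("--checkpoint", default=None,
                        help="pretrained weights; needed for reconstruction/animate")
    args = parser.parse_args()

    # Reconstruction and animation need pretrained weights to start from.
    if args.mode != "train" and args.checkpoint is None:
        parser.error(f"--checkpoint is required for mode '{args.mode}'")

    # Dispatch to the matching pipeline (hypothetical helpers).
    ...

if __name__ == "__main__":
    main()
```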
Go to the app directory:
cd app
uvicorn src.run_server:app --host 0.0.0.0 --port 90 --reload
Available endpoints:
- http://localhost:90/docs (Swagger docs).
- http://localhost:90/api/fomm/video (Image animation).
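Once the server is running, you can smoke-test the animation endpoint from Python. A minimal sketch with the requests library; the multipart field names and the API-key header are assumptions, so consult the Swagger docs at /docs for the actual request schema:

```python
import requests

URL = "http://localhost:90/api/fomm/video"

# Field names and the auth header are assumptions; see /docs for the real schema.
with open("source.png", "rb") as img, open("driving.mp4", "rb") as vid:
    resp = requests.post(
        URL,
        files={"source_image": img, "driving_video": vid},
        headers={"X-API-Key": "your-secret-key"},  # assumed to match ML_ENGINE_KEY
        timeout=300,
    )
resp.raise_for_status()

# Assuming the endpoint streams back the rendered video.
with open("result.mp4", "wb") as out:
    out.write(resp.content)
```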
Go to the app directory:
cd app
docker build . --tag animatica-ml-engine
docker run --name ml-engine -p 9080:90 animatica-ml-engine
Afterwards, the following endpoints are available (container port 90 is mapped to host port 9080):
- http://localhost:9080/docs (Swagger docs).
- http://localhost:9080/api/fomm/video (Image animation).
You can also run the ML engine with Docker Compose alongside Animatica-backend.
A ready-made image is available: kefirchk/animatica-ml-engine:latest
- Add pre-commit.
- Add Docker.
- Add GitHub Actions.
- Optimize the ML model for CPU usage.
- Add async operations to the project.