It would be nice if models like llama-3.2-1b-instruct were supported, or at least if there were some docs on how to add them ourselves. It looks like we can compile from source using scripts/build-runtime.sh and register additional supported models, but I can't figure out which mlc-llm tag to check out to make the script work. When I check out v0.19.0, the build fails because the TVM FFI headers aren't found. Thanks for this library, by the way; it works great with the models it supports out of the box!