Description
Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Example Code (Python)
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline
from langchain.agents import create_agent
import os
from dotenv import load_dotenv

load_dotenv()

MODEL_PATH = "models/Qwen3-4B-Thinking-2507"

llm = HuggingFacePipeline.from_model_id(
    model_id=MODEL_PATH,
    task="text-generation",
    pipeline_kwargs=dict(max_new_tokens=2048, do_sample=False),
    model_kwargs={"device_map": "auto"},
)
chat_model = ChatHuggingFace(llm=llm)
agent = create_agent(model=chat_model)

# results = agent.invoke({"messages": [{"role": "user", "content": "what is the weather in shenzhen"}]})
# for mes in results["messages"]:
#     mes.pretty_print()
Error Message and Stack Trace (if applicable)
File ".../langchain_huggingface/chat_models/huggingface.py", line 813, in _astream
async for chunk in await self.llm.async_client.chat_completion(
AttributeError: 'HuggingFacePipeline' object has no attribute 'async_client'
And the studio interface shows "AttributeError: 'HuggingFacePipeline' object has no attribute 'async_client'"
Description
I'm trying to deploy a LangChain agent using a local Hugging Face model via HuggingFacePipeline and ChatHuggingFace. The setup works in synchronous mode, but fails in LangGraph Studio with the error:
AttributeError: 'HuggingFacePipeline' object has no attribute 'async_client'
Is there any way to work around this issue?
Alternatively, is there another approach to deploying a local model (without using Ollama) as an agent?
Or could you recommend other agent frameworks that work well with locally hosted models? (I need an interactive interface like LangGraph Studio.)
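Per the traceback, `ChatHuggingFace._astream` reaches for `self.llm.async_client`, which only the Hub inference backends carry; a local `HuggingFacePipeline` never creates one, so any async entry point (as used by LangGraph Studio) fails even though the sync path works. One possible workaround, sketched here with plain stdlib code rather than LangChain internals, is to bridge async streaming onto the working sync path by draining the blocking iterator on a worker thread. `fake_sync_stream` is a hypothetical stand-in for the model's sync streaming call; this is an illustration of the pattern, not a confirmed fix in the library:

```python
import asyncio
from typing import AsyncIterator, Callable, Iterator

async def iterate_in_thread(
    sync_iter_factory: Callable[[], Iterator[str]],
) -> AsyncIterator[str]:
    # Drain a blocking iterator on a worker thread, handing each
    # item back to the event loop as soon as it is produced.
    it = sync_iter_factory()
    sentinel = object()
    while True:
        item = await asyncio.to_thread(next, it, sentinel)
        if item is sentinel:
            break
        yield item

def fake_sync_stream() -> Iterator[str]:
    # Stand-in for the sync streaming call that already works locally.
    yield from ["Hello", ", ", "world"]

async def collect() -> str:
    chunks = [c async for c in iterate_in_thread(fake_sync_stream)]
    return "".join(chunks)

print(asyncio.run(collect()))  # → Hello, world
```

Applied to the issue, the same delegation could live in a small `ChatHuggingFace` subclass whose async methods call the sync ones this way, though whether LangGraph Studio accepts that wrapper is untested here.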
System Info
System Information
OS: Linux
OS Version: #149~20.04.1-Ubuntu SMP Wed Apr 16 08:29:56 UTC 2025
Python Version: 3.11.14 (main, Oct 21 2025, 18:31:21) [GCC 11.2.0]
Package Information
langchain_core: 1.1.0
langchain: 1.1.0
langsmith: 0.4.49
langchain_huggingface: 1.1.0
langchain_text_splitters: 0.3.11
langgraph_api: 0.5.27
langgraph_cli: 0.4.7
langgraph_runtime_inmem: 0.19.0
langgraph_sdk: 0.2.10
Optional packages not installed
langserve
Other Dependencies
blockbuster: 1.5.25
click: 8.3.1
cloudpickle: 3.1.2
cryptography: 44.0.3
grpcio: 1.76.0
grpcio-tools: 1.75.1
httpx: 0.28.1
huggingface-hub: 0.36.0
jsonpatch: 1.33
jsonschema-rs: 0.29.1
langgraph: 1.0.4
langgraph-checkpoint: 3.0.1
opentelemetry-api: 1.38.0
opentelemetry-exporter-otlp-proto-http: 1.38.0
opentelemetry-sdk: 1.38.0
orjson: 3.11.4
packaging: 25.0
protobuf: 6.33.1
pydantic: 2.12.5
pyjwt: 2.10.1
python-dotenv: 1.2.1
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
sentence-transformers: 5.1.2
sse-starlette: 2.1.3
starlette: 0.50.0
structlog: 25.5.0
tenacity: 9.1.2
tokenizers: 0.22.1
transformers: 4.57.3
truststore: 0.10.4
typing-extensions: 4.15.0
uvicorn: 0.38.0
watchfiles: 1.1.1
zstandard: 0.25.0