I am following this guide to set up a self-RAG.

I am not allowed to use OpenAI models at the moment, so I've been using ChatOllama models instead. I want to pipe outputs through the with_structured_output() method, which means using OllamaFunctions instead of ChatOllama, as demonstrated here.

Essentially, here is the code:

from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


# Schema for structured response
class Person(BaseModel):
    name: str = Field(description="The person's name", required=True)
    height: float = Field(description="The person's height", required=True)
    hair_color: str = Field(description="The person's hair color")


# Prompt template
prompt = PromptTemplate.from_template(
    """Alex is 5 feet tall. 
Claudia is 1 feet taller than Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.

Human: {question}
AI: """
)

# Chain
llm = OllamaFunctions(model="phi3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm
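
For reference, once the chain builds, I would expect to invoke it roughly like this (the question string is just an illustration):

# example invocation; with_structured_output(Person) should make the chain return a Person instance
alex = chain.invoke({"question": "Describe Alex"})
print(alex)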

I get two errors that bring me to a dead end. The first one is:

ValidationError: 1 validation error for OllamaFunctions
__root__
  langchain_community.chat_models.ollama.ChatOllama() got multiple values for keyword argument 'format' (type=type_error)

So I changed llm = OllamaFunctions(model="phi3", format="json", temperature=0) to llm = OllamaFunctions(model="phi3", temperature=0), which at least gets me past that line.

Then the with_structured_output(Person) call fails with this error:

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain_core/language_models/base.py:208, in BaseLanguageModel.with_structured_output(self, schema, **kwargs)
    204 def with_structured_output(
    205     self, schema: Union[Dict, Type[BaseModel]], **kwargs: Any
    206 ) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]:
    207     """Implement this if there is a way of steering the model to generate responses that match a given schema."""  # noqa: E501
--> 208     raise NotImplementedError()

NotImplementedError:

And I don't know where to go from here. Anything would help. Thanks!

2 Answers


Hobakjuk found the issue: the pip, GitHub, and web-doc versions of ollama_functions are out of sync, which requires a temporary workaround until the PyPI version is updated.

The workaround involves:

  1. Copy the code contents from the GitHub version of ollama_functions.py.

  2. Create a local ollama_functions.py file and paste the code into it.

  3. In your Python code, import the 'patched' local module by replacing

    from langchain_experimental.llms.ollama_functions import OllamaFunctions

    with

    from ollama_functions import OllamaFunctions

Keep track of this change so you can switch back to the published package once PyPI is updated.
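
With the local copy in place, the snippet from the question should run as written. Here is a sketch, assuming ollama_functions.py sits next to your script and reusing the Person schema and prompt from the question:

    # import the patched local copy instead of the pip-installed module
    from ollama_functions import OllamaFunctions

    llm = OllamaFunctions(model="phi3", format="json", temperature=0)
    structured_llm = llm.with_structured_output(Person)  # no longer raises NotImplementedError
    chain = prompt | structured_llm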


1 Comment

Same here; I copied ollama_functions.py into my project. However, I am still having problems with llama-3 and mistral-instruct JSON outputs and the prompt recommended in DEFAULT_SYSTEM_TEMPLATE. Is anyone else seeing this behaviour?

I encountered the same issue as you. After checking the code on GitHub and comparing it with the code installed via pip, the pip version appears to be missing a big chunk of the code that is supposed to support .with_structured_output(). I replaced the installed code with the code from GitHub, and it seems to work fine. I believe this issue will be fixed once they update the pip package for langchain_experimental.
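
A quick way to tell which copy you are importing (just a sanity check; it only looks at whether the class itself defines the method):

    from langchain_experimental.llms.ollama_functions import OllamaFunctions

    # False means the class still inherits the NotImplementedError stub from
    # BaseLanguageModel, i.e. you have the old pip release and need the replacement described above.
    print("with_structured_output" in vars(OllamaFunctions))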

