
I'm trying to use LangChain's structured output feature with a Gemini model, but whenever I run the chain I get this error:

ValueError: unknown enum label "any"

Here’s the code I have used:

from pydantic import BaseModel, Field
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0,
)

class Topic_lists(BaseModel):   
    topic_lists: list[str] = Field(
        ..., description="list of suggested topics",
        min_length=5, max_length=5
    )

structured_model = model.with_structured_output(Topic_lists)

system_message = SystemMessagePromptTemplate.from_template("You are a {system_role}")

frst_human_message = HumanMessagePromptTemplate.from_template(
    "List 5 possible research topics. "
    "Be creative. "
    "Output only the topics separated by a comma ','"
)

first_prompt = ChatPromptTemplate.from_messages([system_message, frst_human_message])

chain_1 = (
    {"system_role": lambda x: x["system_role"]}
    | first_prompt
    | structured_model
    | {"topics": lambda x: x.content}
)

# Invoke the chain
topics = chain_1.invoke({"system_role": "student researcher"})

I followed the LangChain documentation closely and have tried several ways to resolve this. I'm wondering whether it is a dependency version problem...

  • Do you have the full stack trace of the error? It's difficult to pinpoint anything from a single error message. Commented Oct 27 at 5:11

1 Answer


Running your code as-is does not throw the error ValueError: unknown enum label "any" for me.

Here are the packages and versions I used:

langchain-google-genai>=2.1.12
langchain>=0.3.27
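
If you want to confirm what is installed in your own environment, a quick check with the standard importlib.metadata module (a minimal sketch, not part of the original answer) could look like this:

from importlib.metadata import version

# Print the installed versions of the relevant distributions
for pkg in ("langchain", "langchain-google-genai"):
    print(pkg, version(pkg))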

Here's the full code:

from pydantic import BaseModel, Field
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv

load_dotenv()

model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0,
)

class TopicLists(BaseModel):
    topic_lists: list[str] = Field(
        description="list of suggested topics",
        min_length=5, max_length=5
    )

structured_model = model.with_structured_output(TopicLists)

system_message = SystemMessagePromptTemplate.from_template("You are a {system_role}")

frst_human_message = HumanMessagePromptTemplate.from_template(
    "List 5 possible research topics. "
    "Be creative. "
    "Output only the topics separated by a comma ','"
)

first_prompt = ChatPromptTemplate.from_messages([system_message, frst_human_message])

chain_1 = (
    {"system_role": lambda x: x["system_role"]}
    | first_prompt
    | structured_model
    | {"topics": lambda x: x.topic_lists}
)

# Invoke the chain
topics = chain_1.invoke({"system_role": "student researcher"})
print(topics)

Output:

{
  'topics': [
    'The role of AI in art creation',
    'The ethics of gene editing',
    'The impact of social media on mental health',
    'The future of space exploration',
    'The effects of climate change on biodiversity'
  ]
}
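
One thing worth noting (my own observation, not stated explicitly above): with_structured_output returns a parsed TopicLists instance rather than an AIMessage, which is why the last chain step reads x.topic_lists instead of x.content as in your original chain. You can see this by invoking the structured model directly:

# Invoke the structured model on the formatted prompt to inspect its return type
messages = first_prompt.format_messages(system_role="student researcher")
result = structured_model.invoke(messages)
print(type(result))         # the TopicLists Pydantic model, not an AIMessage
print(result.topic_lists)   # list of 5 topic strings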