# Quick Start
In this tutorial, you'll build a custom workflow with two prompts. By the end, you'll have an interactive playground to run and evaluate your chain of prompts.
The complete code for this tutorial is available here.
## What you will build
A chain-of-prompts application that:
- Takes a blog post as input
- Summarizes it (first prompt)
- Writes a tweet from the summary (second prompt)
## 1. Create the application
We will build an app that summarizes a blog post and generates a tweet. The Agenta-specific lines are the ones using the `ag` SDK; we walk through each of them below.
```python
from openai import OpenAI
from pydantic import BaseModel, Field

import agenta as ag
from agenta.sdk.types import PromptTemplate, Message, ModelConfig
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Connect to Agenta and auto-trace all OpenAI calls
ag.init()
client = OpenAI()
OpenAIInstrumentor().instrument()


class Config(BaseModel):
    # First prompt: summarize the blog post
    prompt1: PromptTemplate = Field(
        default=PromptTemplate(
            messages=[
                Message(role="system", content="You summarize blog posts concisely."),
                Message(role="user", content="Summarize this:\n\n{{blog_post}}"),
            ],
            template_format="curly",
            input_keys=["blog_post"],
            llm_config=ModelConfig(model="gpt-4o-mini", temperature=0.7),
        )
    )
    # Second prompt: turn the summary into a tweet
    prompt2: PromptTemplate = Field(
        default=PromptTemplate(
            messages=[
                Message(role="user", content="Write a tweet based on this:\n\n{{summary}}"),
            ],
            template_format="curly",
            input_keys=["summary"],
            llm_config=ModelConfig(model="gpt-4o-mini", temperature=0.9),
        )
    )


@ag.route("/", config_schema=Config)
@ag.instrument()
async def generate(blog_post: str) -> str:
    config = ag.ConfigManager.get_from_route(schema=Config)

    # Step 1: Summarize the blog post
    formatted1 = config.prompt1.format(blog_post=blog_post)
    response1 = client.chat.completions.create(**formatted1.to_openai_kwargs())
    summary = response1.choices[0].message.content

    # Step 2: Write a tweet from the summary
    formatted2 = config.prompt2.format(summary=summary)
    response2 = client.chat.completions.create(**formatted2.to_openai_kwargs())
    return response2.choices[0].message.content
```
Let's explore each section:
### Initialization

```python
import agenta as ag

ag.init()
```
Initialize Agenta using ag.init(). This sets up the connection to Agenta's backend.
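`ag.init()` picks up your credentials from the environment. As a minimal sketch (the variable names `AGENTA_API_KEY` and `AGENTA_HOST` follow Agenta's documented convention; the values shown are placeholders to replace with your own):

```python
import os

# Assumed environment variables; replace with your own values.
os.environ["AGENTA_API_KEY"] = "your-agenta-api-key"
os.environ["AGENTA_HOST"] = "https://cloud.agenta.ai"  # or your self-hosted URL

import agenta as ag

ag.init()
```

Setting these in your shell or a `.env` file works just as well; the only requirement is that they are set before `ag.init()` runs.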
### Configuration with PromptTemplate

```python
class Config(BaseModel):
    prompt1: PromptTemplate = Field(default=PromptTemplate(...))
```
PromptTemplate bundles everything needed for an LLM call: messages, model, temperature, and other settings. Agenta renders a rich editor for each PromptTemplate field in the playground.
Use {{variable}} syntax with template_format="curly". The input_keys list tells Agenta which variables to expect.
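To see how these pieces fit together at call time, here is a hedged sketch using the `Config` defaults outside the route (the exact shape of the returned kwargs is illustrative, not a spec):

```python
# Illustrative only: render the default summarization prompt directly.
cfg = Config()
formatted = cfg.prompt1.format(blog_post="Agenta is an open-source LLMOps platform...")
kwargs = formatted.to_openai_kwargs()
# kwargs bundles the rendered messages with the llm_config settings,
# roughly: {"model": "gpt-4o-mini", "temperature": 0.7, "messages": [...]}
```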
### Entry point

```python
@ag.route("/", config_schema=Config)
@ag.instrument()
async def generate(blog_post: str) -> str:
```

The `@ag.route` decorator exposes your function to Agenta, and `config_schema` tells Agenta what configuration to show in the playground. `@ag.instrument()` records a trace of each call, so you can inspect the intermediate summary and the final tweet in Agenta's observability view.
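Each parameter of the decorated function becomes an input field in the playground. As a hypothetical sketch (the `audience` parameter is invented here for illustration), adding a second argument would surface a second input box:

```python
@ag.route("/", config_schema=Config)
@ag.instrument()
async def generate(blog_post: str, audience: str) -> str:
    # blog_post and audience would both appear as playground inputs.
    ...
```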
### Accessing configuration

```python
config = ag.ConfigManager.get_from_route(schema=Config)
```
This retrieves the configuration from the current request. When you edit prompts in the playground, the new values arrive here.
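Outside a request (for example, in a batch script) there is no route to read from. If you have saved a configuration in Agenta, a hedged sketch of fetching it from the registry instead looks like this (the slugs are placeholders, and the exact parameters may vary by SDK version, so check the reference for yours):

```python
# Hedged sketch: pull a saved configuration by app and environment.
config = ag.ConfigManager.get_from_registry(
    schema=Config,
    app_slug="blog-to-tweet",       # placeholder app slug
    environment_slug="production",  # placeholder environment
)
```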