Breaking Change: Making `tasks_prompts_chain` Agnostic and More Powerful
Welcome, developers and AI enthusiasts! Today, I'm excited to share a major update to the tasks_prompts_chain library. This update introduces a breaking change that not only makes the library completely agnostic regarding the LLM SDK you use but also enhances how responses are handled, giving you the flexibility to choose between streaming and batch processing.
In this tutorial, we'll cover:
- Why this update is essential: Achieving SDK-agnosticism and flexible response handling.
- How to initialize your chain with multiple LLM configurations.
- How to define prompts with explicit LLM selection.
- How to stream responses in real time or wait for complete outputs to access each prompt's result individually.
- Preparing for future integrations with Tasksforge.ai.
Let's dive in!
Why the Update? SDK-Agnosticism and Flexible Response Handling
When I first built `tasks_prompts_chain`, it was designed to depend solely on `openai.AsyncOpenAI`. However, as more projects adopted the library, it became clear that a one-size-fits-all approach wasn't enough. Projects often have varying needs and may prefer different LLM SDKs.
SDK-Agnosticism
With this update, `tasks_prompts_chain` is now agnostic, supporting not only `AsyncOpenAI` but also `AsyncCerebras` and `AsyncAnthropic`. This means you're no longer locked into a single LLM SDK. You pass your chosen LLM SDK to `TasksPromptsChain()` during initialization, giving you the freedom to select the best tool for your project. And don't worry: more SDKs will be supported in the future!
Flexible Response Handling
In addition to SDK flexibility, the update introduces a versatile response-handling mechanism. You can now choose between streaming responses in real time or waiting until the entire chain has finished processing, after which you can access each prompt's output individually. This flexibility lets you inspect and debug outputs at your own pace.
Step 1: Initializing the Chain with LLM Configurations
The first step is to set up your chain by passing a list of LLM configurations. This centralizes the SDK setup, ensuring your project isn’t forced to stick with one implementation.
Below is an example configuration:
```python
from tasks_prompts_chain import TasksPromptsChain
from tasks_prompts_chain.llm import AsyncOpenAI, AsyncAnthropic, AsyncCerebras

llm_configs = [
    {
        "llm_id": "gpt",              # Unique identifier for this LLM
        "llm_class": AsyncOpenAI,     # Your chosen LLM SDK class
        "model_options": {
            "model": "gpt-4o",
            "api_key": "your-openai-api-key",
            "temperature": 0.1,
            "max_tokens": 4120,
        }
    },
    {
        "llm_id": "claude",           # Unique identifier for this LLM
        "llm_class": AsyncAnthropic,  # Your chosen LLM SDK class
        "model_options": {
            "model": "claude-3-sonnet-20240229",
            "api_key": "your-anthropic-api-key",
            "temperature": 0.1,
            "max_tokens": 8192,
        }
    },
    {
        "llm_id": "llama",            # Unique identifier for this LLM
        "llm_class": AsyncCerebras,   # Your chosen LLM SDK class
        "model_options": {
            "model": "llama-3.3-70b",
            "api_key": "your-cerebras-api-key",
            "base_url": "https://api.cerebras.ai/v1",
            "temperature": 0.1,
            "max_tokens": 4120,
        }
    }
]

# Initialize your chain with the LLM configurations
chain = TasksPromptsChain(
    llm_configs,
    "my system Prompt",
    final_result_placeholder="project_analysis",
    system_apply_to_all_prompts=False,
)
```
Step 2: Defining Prompts with Explicit LLM Selection
Once your chain is initialized, the next step is to define your prompts. Each prompt can now specify which LLM to use by including an llm_id attribute. This explicit routing lets you control the execution flow of your prompt chain.
For example:
```python
prompts = [
    {
        "name": "generate_idea",
        "prompt": "Give me a cool project idea about {{ topic }}",
        "output_format": "MARKDOWN",
        "output_placeholder": "elaboration",
        "llm_id": "claude"  # Use the 'claude' LLM for this prompt
    },
    {
        "name": "generate_description",
        "prompt": "Write a detailed README for: {{ elaboration }}",
        "output_format": "MARKDOWN",
        "output_placeholder": "doc_readme",
        "llm_id": "gpt"     # Use the 'gpt' LLM for this task
    }
]
```
By including the `llm_id` in each prompt, you gain granular control over which language model processes each task.
Flexible Response Handling: Streaming vs. Batch Processing
One of the most exciting improvements is how the library handles responses. You now have two modes available:
1. Streaming Response
If you prefer real-time feedback, you can stream responses as they come in. For example, printing each response chunk:
```python
async for response in chain.execute_chain(prompts):
    print(response)
```
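Because `execute_chain` is consumed with `async for`, the loop above has to run inside a coroutine. Here is a minimal sketch of an entry point; `main` is just an illustrative name, and `chain` and `prompts` are the objects defined earlier in this post:

```python
import asyncio

# Minimal entry point for the streaming mode shown above.
# Assumes `chain` and `prompts` were built as in the earlier snippets.
async def main():
    async for response in chain.execute_chain(prompts):
        print(response)

asyncio.run(main())
```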
2. Batch Response Processing
Alternatively, you might prefer to wait until all responses are received. When the chain emits the special marker Done, you can then access each prompt’s output individually. Here's an example:
```python
async for response in chain.execute_chain(prompts):
    # Optionally, print interim chunks:
    # print(response)
    # When the final output is ready:
    if response == "Done ":
        print("\nFinal Results:\n")
        for placeholder in ["elaboration", "doc_readme"]:
            print(f"\n{placeholder}:")
            print(chain.get_result(placeholder))
```
This approach offers you full control: either enjoy real-time outputs or wait for complete responses to process further.
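If you want to switch between the two modes from a single call site, one possible pattern is a small wrapper built only on the `execute_chain` and `get_result` calls shown above. The `run_chain` helper, the `placeholders` argument, and the `stream` flag are illustrative names of my own, not part of the library's API:

```python
async def run_chain(chain, prompts, placeholders, stream=True):
    """Drain the chain, optionally printing chunks, then collect outputs.

    `run_chain`, `placeholders`, and `stream` are illustrative names;
    only `execute_chain` and `get_result` come from the library.
    """
    async for response in chain.execute_chain(prompts):
        if stream:
            print(response)
    # Once the chain has finished, each prompt's output is available
    # under its output_placeholder.
    return {name: chain.get_result(name) for name in placeholders}
```

With this sketch, `results = await run_chain(chain, prompts, ["elaboration", "doc_readme"], stream=False)` would return a dict keyed by placeholder while suppressing the interim chunks.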
Preparing for the Future: Integration with Tasksforge.ai
These improvements in tasks_prompts_chain are just the beginning. With the library now SDK-agnostic and featuring flexible response handling, it’s perfectly poised for deeper integration with Tasksforge.ai. Imagine a platform where you can effortlessly swap LLMs, inspect individual step outputs, and build robust AI workflows—all while maintaining complete freedom over your chosen SDK.
Getting Started
Ready to harness a more flexible and powerful prompt chain? Follow these steps (a combined sketch follows the list):
1. Update your initialization: Pass your preferred LLM SDKs via the `llm_configs` list.
2. Define your prompts: Use the `llm_id` attribute to route tasks to your chosen LLM.
3. Choose your response mode: Stream in real time or process outputs in batch, whichever suits your workflow.
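Putting the three steps together, here is a compact end-to-end sketch that reuses the `llm_configs` and `prompts` lists from the examples above (the argument order passed to `TasksPromptsChain` mirrors the Step 1 snippet):

```python
import asyncio

from tasks_prompts_chain import TasksPromptsChain

async def main():
    # Step 1: build the chain from the llm_configs defined earlier.
    chain = TasksPromptsChain(
        llm_configs,
        "my system Prompt",
        final_result_placeholder="project_analysis",
        system_apply_to_all_prompts=False,
    )

    # Steps 2 and 3: run the prompts defined earlier, streaming chunks as they arrive.
    async for response in chain.execute_chain(prompts):
        print(response)

    # Batch-style access once the chain has finished.
    for placeholder in ["elaboration", "doc_readme"]:
        print(f"\n{placeholder}:")
        print(chain.get_result(placeholder))

asyncio.run(main())
```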
For more details and advanced examples, check out the updated documentation: