Thought 20250508 LangChain Ecosystem

May 8, 2025 - 20:36

Today was my second time attending a LangChain talk. The first was at my previous company; this time it was an ACM-invited session. Like the first LangChain presentation, this one felt more like commercial marketing than an informative session. Forty minutes in, it still had no substance: just a few well-known problems and the claim that “LangGraph” can solve them.

Key components of LangChain (a rough sketch of how they compose follows the list):

  • Paid APIs (PDF ingestion and other utilities)
  • Text splitters, output parsers, document loaders, vector stores
  • Prompts, example selectors, tools, models
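
To make the list concrete, here is a rough sketch of how those pieces compose in LangChain's Python API. This is illustrative only: the model name, the file path, and the package split (langchain-core, langchain-openai, langchain-community, langchain-text-splitters) come from my own reading of the docs, not from the talk.

```python
# Illustrative sketch: splitter + vector store on the document side,
# prompt -> model -> parser on the model side. "notes.txt" is a placeholder.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS

# Split raw text into chunks and index them for similarity search.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(open("notes.txt").read())
store = FAISS.from_texts(chunks, OpenAIEmbeddings())

# Compose prompt, model, and output parser with the | (LCEL) operator.
prompt = ChatPromptTemplate.from_template(
    "Answer from this context:\n{context}\n\nQuestion: {question}"
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

docs = store.similarity_search("What does LangChain provide?", k=2)
print(chain.invoke({
    "context": "\n\n".join(d.page_content for d in docs),
    "question": "What does LangChain provide?",
}))
```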

They certainly have some good features, like PDF ingestion, but those are usually hidden behind their platform’s paywall, which, frankly, isn’t very impressive: a vendor-lock-in risk kicks in whenever your tech stack relies heavily on a hosted service. And the design isn’t even that good.

The goal of attending this session was to study how they do things. Now that Divooka is maturing and we provide more native AI service integration, I appreciate even more a node-native compositional model like ComfyUI’s for chaining AI services together. The one key (and probably only) takeaway was that agents drive impact. Agentic applications do make for interesting use cases when proper pipelines are built around them.

Let’s take a look at the first problem they proposed, concerning real-time use, which they later suggested is not a real problem so much as a matter of perceived processing speed (the proposed mitigations are sketched after the table):

| Challenge | Proposed Solution |
| --- | --- |
| Multiple LLM calls needed | Parallelize steps where possible |
| Non-LLM steps before or after (RAG, database queries, tool calls) | Parallelize or batch where feasible |
| Keeping the user engaged while waiting | Stream intermediate outputs (optional) |
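
In LangChain terms, the first two rows map to running independent runnables concurrently, and the third to token streaming. A minimal sketch, assuming langchain-core and langchain-openai; the prompts and model name are mine, not the speaker’s:

```python
# Sketch: run independent steps concurrently, then stream tokens to the user.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()
summary = ChatPromptTemplate.from_template("Summarize: {text}") | model | parser
keywords = ChatPromptTemplate.from_template("Give 3 keywords for: {text}") | model | parser

# Rows 1-2: independent steps (LLM or non-LLM) run concurrently.
both = RunnableParallel(summary=summary, keywords=keywords)
results = both.invoke({"text": "Agents plan, call tools, and loop until done."})

# Row 3: stream intermediate output token by token to keep the user engaged.
for token in summary.stream({"text": results["summary"]}):
    print(token, end="", flush=True)
```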

The problem with the proposed solution is that it’s very specific to real‑time applications, which I believe may not be the best use of agent capabilities.

LangGraph has some very interesting features, though (a minimal code sketch follows the list):

  • Long-running workflow management with streaming and persistence, allowing human-in-the-loop
  • Integration with LangSmith
  • Architecture: a main graph plus per-agent subgraphs and “multi-agent architectural patterns”
  • First-class support for token- and event-level streaming
  • (Someone mentioned it’s similar to Amazon Bedrock Flows)
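
The talk showed no code, so the following is my own minimal sketch of what a LangGraph workflow looks like, based on the library’s public Python API; the node logic and names are stand-ins, not anything from the session:

```python
# Minimal LangGraph sketch: two nodes plus a checkpointer for persistence.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    question: str
    answer: str

def research(state: State) -> dict:
    # Stand-in for a real agent step (RAG lookup, tool call, LLM call...).
    return {"answer": f"notes on {state['question']}"}

def summarize(state: State) -> dict:
    return {"answer": state["answer"].upper()}

builder = StateGraph(State)
builder.add_node("research", research)
builder.add_node("summarize", summarize)
builder.add_edge(START, "research")
builder.add_edge("research", "summarize")
builder.add_edge("summarize", END)

# The checkpointer persists state after every step under a thread_id, which
# is what enables long-running runs, resumption, and human-in-the-loop.
graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "demo"}}
print(graph.invoke({"question": "what is LangGraph?", "answer": ""}, config))
```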

On the other hand, quite a few questions remain unanswered:

  • How does LangChain/LangGraph work—what’s the full architecture?
  • Are we going to rely on LangChain to provide all the infrastructure as a hosted service, or do we need to self-host?
  • How are states stored, and how do we configure the runtime model?
  • How do we integrate/configure human-in-the-loop? (A sketch of LangGraph’s interrupt mechanism follows this list.)
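
The session left that last question open, but LangGraph’s documentation does answer it: compile the graph with an interrupt, let a human inspect or edit the checkpointed state, then resume. A hedged sketch, reusing builder and State from the snippet above:

```python
# Sketch of human-in-the-loop via interrupts (reuses `builder` from above).
from langgraph.checkpoint.memory import MemorySaver

graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["summarize"])
config = {"configurable": {"thread_id": "review-1"}}

# Runs "research", then pauses before "summarize" and checkpoints the state.
graph.invoke({"question": "what is LangGraph?", "answer": ""}, config)

# A human can inspect or edit the saved state, then resume by passing None.
graph.update_state(config, {"answer": "human-reviewed notes"})
print(graph.invoke(None, config))
```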

They did mention a textbook, Learning LangChain: Building AI and LLM Applications with LangChain and LangGraph, by Mayo Oshin and Nuno Campos.