Inside SAS’s Push to Make AI Agents Accountable


May 13, 2025 - 17:32

At SAS Innovate 2025 in Orlando, SAS unveiled its roadmap for agentic AI, positioning itself as a company that was quietly working on intelligent decision automation long before AI agents became a trending topic. The latest enhancements to its SAS Viya platform aim to help enterprises design, deploy, and govern AI agents that combine automation with ethical oversight.

While many tech vendors are racing to show off how many AI agents they can spin up at once, SAS CTO Bryan Harris dismisses such counts as a vanity metric. What really counts, he said, is not the quantity of agents but the quality of their output.

“The metric that matters,” Harris told AIwire, “is what kind of decisions you're running in the enterprise, and what's the value of those decisions to the business?”

How SAS Defines Agentic AI

Agentic AI, as defined by SAS, is not simply about automating tasks but about building systems that make decisions with a blend of reasoning, analytics, and embedded governance. The SAS Viya platform supports this vision by integrating deterministic models, machine learning algorithms, and large language models into a unified orchestration layer. The goal is to enable enterprises to deploy intelligent agents capable of acting autonomously when appropriate, while providing transparency and human oversight when the stakes are high.

SAS Innovate 2025. (Source: The Author)

Udo Sglavo, VP of applied AI and modeling R&D, described SAS’s agentic push as a natural evolution from the company's consulting-driven past. “We’ve been doing this kind of modeling exercise for a long time, but typically it was a one-to-one relationship. You came to me with a problem, I’d send in consultants, they’d solve it, off we go,” Sglavo told AIwire. “Now the idea is, if you’ve done this ten, a hundred times for the same kind of challenge, why not take all this IP and put it into a software product?”

This shift from services to scalable solutions, according to Sglavo, has been accelerated by growing comfort with LLMs. "There’s been a mindset change. Customers are now more willing to adopt models they didn’t build themselves," he said. That shift has cleared the way for wider adoption of prepackaged models and agent-based systems.

The Limits of Large Language Models

Both Harris and Sglavo emphasized that LLMs, despite their widespread appeal, are only one piece of a much larger enterprise AI picture. At SAS, LLMs are viewed as valuable but limited components that need to be paired with other forms of intelligence to drive reliable, repeatable decisions.

The SAS executives explained that unlike deterministic models, which return consistent outputs for the same inputs every time, LLMs can be unpredictable. “If I run a deterministic model with the same conditions a thousand times, I’ll get the same answer a thousand times,” Harris said. “That’s not the case for large language models.” This variability makes them ill-suited for high-stakes applications where auditability and control are critical. Instead, SAS uses LLMs where they excel: speeding up repetitive tasks and generating prototype solutions that humans or more deterministic systems can later refine.

One example of this kind of speedup is schema mapping, a task that often requires domain knowledge and painstaking manual review. With metadata as input, LLMs can rapidly suggest column matches and generate code, reducing a multi-week effort to minutes. However, because accuracy can vary, SAS integrates confidence scoring and keeps a human in the loop to validate results.
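SAS has not published code for this workflow, but the pattern it describes is straightforward: propose a match, attach a confidence score, and route low-confidence matches to a human. A minimal sketch might look like the following, with `difflib` string similarity standing in for the LLM call so the example runs offline; the function name and threshold are illustrative, not SAS's API.

```python
import difflib

def suggest_column_mappings(source_cols, target_cols, review_threshold=0.8):
    """Suggest source->target column matches with a confidence score.

    A real system would ask an LLM to propose matches from column
    metadata; difflib similarity stands in here so the sketch runs
    offline. Matches below the threshold are flagged for human review.
    """
    mappings = []
    for src in source_cols:
        best, score = None, 0.0
        for tgt in target_cols:
            s = difflib.SequenceMatcher(None, src.lower(), tgt.lower()).ratio()
            if s > score:
                best, score = tgt, s
        mappings.append({
            "source": src,
            "target": best,
            "confidence": round(score, 2),
            "needs_review": score < review_threshold,  # human-in-the-loop gate
        })
    return mappings

result = suggest_column_mappings(
    ["cust_id", "birth_dt"], ["customer_id", "birth_date", "region"]
)
```

The key design point is the review gate: rather than trusting every suggestion, anything below the confidence threshold is surfaced to a person, which matches the human-validation step SAS describes.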

In more advanced use cases, SAS has also implemented techniques that allow LLMs to iterate on their own outputs by revisiting earlier steps, rethinking mappings, and challenging initial assumptions. This iterative self-checking behavior is a key design principle in SAS's agentic AI framework, where agents do not just accept the first answer but reason through problems dynamically.
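The "don't accept the first answer" behavior described above amounts to a critique-and-revise loop. As an illustration only (SAS's actual implementation is not public), a sketch of that control flow could look like this, with toy `critique` and `revise` functions standing in for LLM calls:

```python
def refine_until_accepted(draft, critique, revise, max_rounds=3):
    """Iterative self-checking: critique the current draft, revise it
    if problems are found, and stop once the critique passes (or a
    round limit is hit, so the loop cannot run forever)."""
    for _ in range(max_rounds):
        problems = critique(draft)
        if not problems:
            break
        draft = revise(draft, problems)
    return draft

# Toy stand-ins: the critic flags mappings with no target column,
# and the reviser fills them with an explicit placeholder.
def critique(mapping):
    return [k for k, v in mapping.items() if v is None]

def revise(mapping, problems):
    return {k: (v if v is not None else "UNMAPPED") for k, v in mapping.items()}

fixed = refine_until_accepted({"cust_id": "customer_id", "zip": None},
                              critique, revise)
```

In a real agent, both the critic and the reviser would themselves be model calls, which is what lets the system revisit earlier steps and challenge its own assumptions.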

Giving Agents a Goal

The key distinction SAS draws between traditional automation and agentic AI lies in goal orientation. Rather than simply executing a set of predefined instructions, agents are designed to pursue a defined goal and adjust their behavior dynamically until that goal is met. This capability reflects a shift in how organizations are thinking about AI, driven in part by the disillusionment that followed early enthusiasm around LLMs.

Udo Sglavo, SAS VP of Applied AI and Modeling R&D

Sglavo explained in an interview how many business leaders initially hoped that generative models would offer a kind of universal intelligence where you could drop in a business problem and get out a solution. Instead, LLMs proved best suited for narrow tasks like text analysis. The emergence of agentic AI, he said, represents an effort to combine the statistical, machine learning, and optimization techniques developed over decades with the newer capabilities of LLMs and retrieval-augmented knowledge systems.

In this framework, agents become orchestrators of those tools. Rather than being explicitly programmed for each step, they are handed an objective, such as increasing event registration numbers, and are then tasked with deciding how to achieve it. For example, an agent could generate emails, identify potential recipients using a statistical model, and continue refining its campaign until a defined target is reached.
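The goal-driven loop Sglavo describes can be sketched in a few lines: act, measure progress toward the target, and keep going until the goal is met or options run out. Everything below is hypothetical scaffolding, not SAS code; the scoring function stands in for the statistical model that ranks recipients.

```python
def run_campaign_agent(goal, candidates, score, send_batch, max_rounds=10):
    """Goal-oriented loop: keep acting until the registration goal is
    met. `score` ranks prospects (stand-in for a propensity model);
    `send_batch` contacts a batch and returns how many registered."""
    registrations, contacted = 0, set()
    for _ in range(max_rounds):
        if registrations >= goal:
            break
        pool = [c for c in candidates if c not in contacted]
        batch = sorted(pool, key=score, reverse=True)[:50]
        if not batch:
            break  # no one left to contact; goal unreachable
        contacted.update(batch)
        registrations += send_batch(batch)
    return registrations

candidates = list(range(200))
total = run_campaign_agent(
    goal=30,
    candidates=candidates,
    score=lambda c: c,                         # toy stand-in for a model
    send_batch=lambda batch: len(batch) // 5,  # toy 20% conversion rate
)
```

The contrast with traditional automation is that nothing here prescribes how many emails to send: the loop terminates on the goal condition, not on a fixed script.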

This kind of agent, Sglavo noted, is well-suited for low-risk scenarios like marketing campaigns. But when the stakes are higher, such as decisions about credit approvals or healthcare outcomes, the approach must shift. Human-in-the-loop oversight becomes essential, and clear governance frameworks must define where autonomy ends and accountability begins.

Governance and Trust at the Core

The SAS executives stressed that agentic AI cannot be responsibly deployed without built-in governance. SAS Viya includes mechanisms to detect bias, evaluate fairness, and provide full transparency into how decisions are made. "We give our customers insight into when a model is deficient," said Harris. "And then they can make the choice to improve the data or improve the model."

(Source: Suri_Studio/Shutterstock)

Governance also includes controls over how much autonomy agents are granted. This is especially critical in high-risk domains like finance, healthcare, and public services. SAS includes guardrails that ensure transparency and let customers fine-tune how much autonomy agents are allowed.

SAS also emphasizes the importance of localized knowledge sources. Rather than relying on internet-sourced information, agents can be configured to draw only from enterprise-specific data repositories. Retrieval-augmented generation (RAG) setups enable agents to access internal knowledge bases to make contextual decisions without compromising security or accuracy.
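The retrieval step in such a setup is conceptually simple: rank internal documents against the query and ground the model in only that context. The sketch below uses plain term overlap so it stays self-contained; a production RAG pipeline would use embeddings over an enterprise vector store, and none of these names come from SAS's products.

```python
def retrieve(query, documents, k=2):
    """Rank internal documents by term overlap with the query and
    return the top k. Term overlap is a stand-in for embedding
    similarity so the sketch needs no external services."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Ground the model in retrieved enterprise context only --
    no open-internet sources -- before asking the question."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using only this internal context:\n{context}"
            f"\n\nQuestion: {query}")

internal_docs = [
    "credit approval policy requires two reviewers",
    "holiday schedule for 2025",
    "approval workflow for credit limit changes",
]
prompt = build_prompt("credit approval policy", internal_docs)
```

Restricting retrieval to the enterprise repository is what gives the agent contextual grounding without exposing it to unvetted external sources.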

A Marketplace of Agents Is Coming

Looking ahead, Sglavo expects agentic AI to evolve into an open marketplace, where enterprises can mix and match specialized agents from different vendors. In that future, decision-making will be distributed across interconnected agent networks that communicate and collaborate using shared protocols like MCP or Google's open source A2A. This vision also redefines how enterprises think about deployment. Rather than shipping massive monolithic AI systems, companies will deploy nimble agents, each with a narrow focus but deep specialization.

“This will become the marketplace of agents,” Sglavo said. “Because while we may say we have the best supply chain optimization agent, another vendor may claim the same thing. And then it becomes a question of trust, pricing, track record. Have they done this before? Are they just a startup that’s good at tech but hasn’t worked with actual customers?”

Sglavo added that enterprises will want the flexibility to select and combine agents based on their needs. “You’ll say, I want to use this agent, this one, and this one—and just bring them all together.”

A Future Built on Accountable AI

Bryan Harris, CTO at SAS

As generative AI continues to capture headlines, SAS is placing its bet on decision-first AI. For companies in regulated sectors where the cost of a bad decision can be measured in lives or billions, the company argues, transparency and trust must come before experimentation or scale.

As the enterprise AI conversation shifts from experimental prototypes to more practical, accountable systems, SAS is staking out a space where trust, interoperability, and decision quality come first.

"You can't prevent irresponsibility," said Harris. "But we can give you the tools that allow you to make the right decision."