AI Agents: The Future of Work or Just Hype?

I’ve mostly been using AI agents to save myself from the pitfalls of doom-scrolling on mind-numbing, soul-sucking, overstimulating apps, and to handle little menial tasks at work.
It’s very cool to have a team of LLMs working together on the roles you’ve assigned them, doing exactly what you asked.
It’s a lot more compute for smaller tasks, but I still like the idea of having some local support to take care of any extra overhead if need be.
These aren't just a bunch of fancy LLMs working in collaboration; they are, in fact, a different approach to problem solving.
Coming from our procedural programming era, this will take some getting used to, as we shift from writing explicit instructions for a machine to managing newly employed autonomous models that work through a task based on their own understanding.
Give them clear, coherent roles, and they’re accurate.
Give them time, and they evolve.
Give them tools, and they scale.
The payoff is a completely new surface area of problems we can tackle and solutions we can approach.
Agentic frameworks like CrewAI are good and simple, lowering the barrier to complex workflows.
They’re easy enough for a high-schooler automating their social media, for a single person running an agency and outcompeting a whole team, and all the way up to businesses and governments delivering faster, more reliable solutions.
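To make that concrete, here is a minimal sketch of what a CrewAI setup can look like: two agents with clear roles, two tasks, one hand-off. The roles, goals, and topic are made up for illustration, and the exact API details may vary between CrewAI versions.

```python
# A minimal CrewAI sketch: two agents with assigned roles handing work to each other.
# Agent names, goals, and task text are hypothetical; check your CrewAI version's docs.
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Research Analyst",
    goal="Collect three recent talking points about AI agents",
    backstory="You scan sources and summarize what matters.",
)

writer = Agent(
    role="Social Media Writer",
    goal="Turn research notes into a short, engaging post",
    backstory="You write concise posts for a technical audience.",
)

research_task = Task(
    description="Gather three talking points about AI agents in the workplace.",
    expected_output="A bullet list of three talking points.",
    agent=researcher,
)

write_task = Task(
    description="Write a 100-word post based on the research notes.",
    expected_output="A single short post.",
    agent=writer,
    context=[research_task],  # hand the researcher's output to the writer
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # run the tasks one after another
)

result = crew.kickoff()
print(result)
```

That’s roughly the whole barrier to entry: describe the roles, describe the tasks, wire up the hand-off, and kick it off.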
The difference isn’t just one of productivity, but of ambition.
That being said, a (huge) problem with AI agent systems is that they don't exactly work (at least not the way I was expecting).
To go as far as saying they can take over complete cognition, with seamless task hand-offs and perfect multi-step memory management, is just not true yet.
We have a long way to go.
For instance, while agents built with frameworks like CrewAI are great at automating social media posts or scheduling meetings, they still struggle with more complex, multi-step workflows.
An AI that is supposed to handle a task from start to finish often hits a roadblock when trying to make decisions based on incomplete data or when task hand-offs between agents aren't smooth.
This lack of context retention, coupled with an inability to fully understand nuances, limits more sophisticated use cases, like sales or decision-making in a high-stakes environment.
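As a hypothetical illustration of that hand-off problem (nothing here is specific to CrewAI or any other framework; run_agent is just a stand-in for an LLM call), the only difference between the two runs below is whether the first agent's output actually reaches the second:

```python
# A framework-agnostic sketch of the hand-off problem.
# run_agent() is a placeholder for an LLM-backed agent call (hypothetical).

def run_agent(role: str, instruction: str, context: str = "") -> str:
    """Pretend-agent: just records what it was asked and what context it saw."""
    return f"[{role}] did '{instruction}' with context: '{context or 'NONE'}'"

def pipeline(pass_context: bool) -> str:
    # Step 1: the first agent produces intermediate findings.
    findings = run_agent("researcher", "summarize the customer's request")

    # Step 2: the second agent should build on those findings.
    # If the hand-off drops them, it makes decisions on incomplete data.
    context = findings if pass_context else ""
    return run_agent("planner", "draft next steps", context)

print(pipeline(pass_context=True))   # planner sees the researcher's output
print(pipeline(pass_context=False))  # planner works blind: the failure mode above
```

In real agent systems the context isn't a single string you can simply forward; it's scattered across tool outputs, partial memories, and summaries, which is exactly where the retention and hand-off issues show up.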
We’re early, but the future of our work leans agentic.
It's an interesting era for problem solving, as software engineers get more room to focus on strategy, creativity, and better generalized systems instead.
In some ways, we’re no longer writing software.
We’re engineering management itself.
Here's the link to the original post.
Let me know your thoughts in the comments. Thanks!