2025 Year in Review
Overview
I’ve never been particularly attached to the idea of a pre-defined date for reflection. But now that I have a personal website, why not?
Well, January 1st felt like yesterday, and now we are close to January 1st again. As the saying goes, time flies when you’re having fun, and 2025 certainly delivered plenty of that.
Looking back, it feels like I accomplished a lot. Yet at the same time, I cannot help but think I could have done more.
What follows is not a victory lap. It’s more of a snapshot of the work, the experiments, the missteps, and the slow accumulation of clarity that came from simply living life.
Professional Highlights
1. Forward Deployed Engagement with an External Team
At the time, I didn’t label it as a Forward-Deployed Engineer (FDE) engagement. But in hindsight, that’s exactly what it was.
I was embedded with an external team. Not just advising from a distance, but working alongside them as a builder and problem-solver. The goal wasn’t to create a perfect solution design document. It was to ship something real.
Huge shoutout to Sravanthi, Paresh, and many others who pushed this use case all the way into production.
2. A Singular, Modular Agent
One problem kept showing up across teams: vast amounts of unstructured data, and very little time to make sense of it.
This led to building a reusable, modular agent designed to convert unstructured inputs such as Excel files, Word documents, and free-form text into structured, queryable insights. The emphasis wasn’t just on extraction, but on reusability and composability, allowing the agent to be adapted across domains without being rewritten each time.
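To make the reusability idea concrete, here is a minimal sketch of the pattern, with entirely hypothetical names: per-format loaders plug into one shared extraction step, so supporting a new input type means registering a loader, not rewriting the agent. The real agent is more involved; this only illustrates the composition.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical sketch: a registry of per-format loaders feeding one
# shared extraction step, so new input types plug in without rewrites.

class ModularExtractor:
    def __init__(self, extract: Callable[[str], Dict[str, Any]]):
        self.extract = extract          # domain-specific extraction logic
        self.loaders: Dict[str, Callable[[str], str]] = {}

    def register_loader(self, suffix: str, loader: Callable[[str], str]) -> None:
        self.loaders[suffix] = loader   # e.g. ".xlsx" -> spreadsheet reader

    def run(self, path: str, raw: str) -> Dict[str, Any]:
        suffix = "." + path.rsplit(".", 1)[-1]
        load = self.loaders.get(suffix, lambda s: s)  # default: pass through
        return self.extract(load(raw))

# Toy usage: the "extraction" here is a trivial word count stand-in.
agent = ModularExtractor(extract=lambda text: {"words": len(text.split())})
agent.register_loader(".txt", str.strip)
print(agent.run("notes.txt", "  three little words "))  # {'words': 3}
```

Swapping the `extract` callable is what lets the same skeleton serve different domains.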
Massive thanks to Sravanthi’s team for trusting this as the first real customer, and to Billy for building and iterating on this together. Seeing it move from concept to something people actually used was deeply satisfying.
3. Evaluation, Evaluation, Evaluation
Arguably the least sexy part of building a Generative AI or Agentic AI use case. If Pareto’s principle applied, evaluation would easily take 80 percent of the effort.
But it is also the difference between a cool demo and a system you can actually depend on.
In 2025, this became impossible to ignore. Without proper evaluation, you simply do not know:
- Whether your agent is improving or regressing
- Where errors are coming from
- Whether changes are actually making things better
Evaluation is what makes iteration meaningful. It turns intuition into evidence, and experimentation into engineering. In agentic systems especially, where errors compound across multiple steps, evaluation is not optional. It is survival.
It may not be exciting, but it is foundational. And once you internalize that, you stop treating it as overhead and start treating it as leverage.
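As a toy illustration of the point, not our actual playbook: pin a fixed test set, score each agent version against it, and "improving or regressing" becomes a measured claim instead of a gut feeling. All names and cases here are made up.

```python
# Minimal evaluation sketch (hypothetical): score two agent versions on
# the same fixed test set and compare.

def accuracy(agent, cases):
    """Fraction of cases where the agent's answer matches the expectation."""
    return sum(agent(q) == a for q, a in cases) / len(cases)

cases = [("2+2", "4"), ("3+3", "6"), ("5+5", "10")]

baseline = lambda q: "4"                                   # naive agent: always "4"
candidate = lambda q: str(sum(int(x) for x in q.split("+")))  # agent under test

old, new = accuracy(baseline, cases), accuracy(candidate, cases)
print(f"baseline={old:.2f} candidate={new:.2f} regression={new < old}")
```

The same harness run on every change is what turns iteration into engineering rather than guesswork.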
Stoked to have worked on an internal Evaluation Playbook that documented our findings and learnings while building real use cases.
Huge shoutout to Bruce and Youssef for working on this together.
4. NTU Agentic AI Workshop
SAP and NTU collaborated on an Agentic AI hackathon, inviting industry practitioners to share hands-on perspectives. Not theory and not hype, but what it actually takes to build these systems.
Huge shoutout to my partner in crime Yang Yue for delivering this together. Teaching forced clarity. If you cannot explain it simply, you probably do not understand it well enough yourself.
Also grateful to Sam and Jean for making this possible.
5. Work Trips – Agentic AI Bootcamp
I had the opportunity to support and facilitate the early batches of the Agentic AI Bootcamp within SAP CPIT, working closely with teams over an intensive, hands-on week of building and experimentation.
The experience went far beyond teaching concepts. It was about helping teams move from abstract curiosity to concrete execution: going from zero to one with a working proof of concept, then confronting real questions around orchestration, observability, reliability, and governance that only surface when you actually try to build something.
What stood out most was the mindset shift. Watching teams move from asking “Can we do this?” to “How do we do this properly?” was both energizing and deeply rewarding.
Too many people to name here, but I’m grateful to everyone who made this possible.
6. Memory Bank SDK (Ongoing)
Anyone serious about building AI agents eventually runs into this wall. Prompt engineering and context stuffing do not scale.
If you want agents that are adaptive, self-improving, and long-lived, you need memory. Not as an afterthought, but as a first-class primitive.
The Memory Bank SDK explores this idea head on:
- How agents store experiences
- How memory evolves over time
- How relevance, forgetting, and consolidation should work
This is still ongoing, but the core belief is simple. Without memory, agents are reactive. With memory, they become systems that can learn, adapt, and accumulate intelligence over time.
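The three questions above can be made concrete with a small sketch. To be clear, this is not the Memory Bank SDK itself; every name here is hypothetical, and it only illustrates one simple take: storing experiences, scoring relevance with recency decay, and consolidating by forgetting the least-used items.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch: store experiences, rank them by keyword overlap
# weighted with recency decay, and forget low-value items on consolidation.

@dataclass
class Memory:
    text: str
    created: float = field(default_factory=time.time)
    uses: int = 0

class MemoryBank:
    def __init__(self, half_life: float = 3600.0):
        self.items: list[Memory] = []
        self.half_life = half_life          # seconds until relevance halves

    def store(self, text: str) -> None:
        self.items.append(Memory(text))

    def relevance(self, m: Memory, query: str) -> float:
        overlap = len(set(query.lower().split()) & set(m.text.lower().split()))
        decay = 0.5 ** ((time.time() - m.created) / self.half_life)
        return (overlap + m.uses) * decay   # repeated recall reinforces a memory

    def recall(self, query: str, k: int = 3) -> list[str]:
        ranked = sorted(self.items, key=lambda m: self.relevance(m, query),
                        reverse=True)
        for m in ranked[:k]:
            m.uses += 1                     # recalled memories get stronger
        return [m.text for m in ranked[:k]]

    def consolidate(self, keep: int = 100) -> None:
        # Forgetting: keep only the most-used, most-recent memories.
        self.items.sort(key=lambda m: (m.uses, m.created), reverse=True)
        del self.items[keep:]

bank = MemoryBank()
bank.store("user prefers concise answers")
bank.store("project deadline is Friday")
print(bank.recall("when is the deadline", k=1))  # ['project deadline is Friday']
```

Even this toy shows why the design questions are hard: the decay constant, the reinforcement rule, and the forgetting policy are all judgment calls with real behavioral consequences.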
Big shoutout to Yan Ling and Jun Wei for building this together.
7. Innovation, Innovation, Innovation
- Drag & Drop → Exported Code
Low-code and no-code tools promise speed, but they often fall apart when customization and scale are required, especially for AI agents operating in complex enterprise environments.
The idea here was to bridge the gap:
- Keep the intuitive user experience of drag-and-drop builders
- Preserve the power and flexibility of pro-code systems
The vision was simple but ambitious: allow users to visually compose agent components and then export them as real, editable LangGraph nodes. Not a dead-end abstraction, but a starting point developers could own and extend.
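The export idea can be sketched in miniature. This is not the actual exporter and does not use the LangGraph API; it only illustrates the principle with made-up names: a visual builder emits a JSON graph spec, and plain, editable source is generated from it, so the developer owns the output.

```python
import json

# Hypothetical sketch of the export idea: a JSON spec from a visual
# builder is turned into plain Python stubs the developer can edit.

spec = json.loads("""
{
  "nodes": [
    {"name": "fetch",     "doc": "Retrieve raw input"},
    {"name": "summarize", "doc": "Condense the input"}
  ],
  "edges": [["fetch", "summarize"]]
}
""")

def export(spec: dict) -> str:
    lines = []
    for node in spec["nodes"]:
        lines += [f"def {node['name']}(state):",
                  f"    \"\"\"{node['doc']}\"\"\"",
                  "    return state", ""]
    for src, dst in spec["edges"]:
        lines.append(f"# edge: {src} -> {dst}")
    return "\n".join(lines)

print(export(spec))
```

The key property is that the generated file is ordinary code with no runtime dependency on the builder, which is exactly what makes it a starting point rather than a dead end.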
Seeing Google converge toward similar ideas later was reassuring. It validated the direction.
Huge shoutout to Juan for spearheading this solution. Keep an eye out. Who knows, maybe we will open-source it.
- MCP & A2A Registry / Gateway
Early in the year, a pattern became obvious. MCP servers, A2A agents, and reusable Agent Skills were going to proliferate and fragment.
Without coordination, each team would build in isolation, creating silos that would be painful to integrate later.
This led to exploring a registry or gateway concept. A unifying layer to discover, manage, and route between MCP servers, A2A agents, and shared Agent Skills. Not just a catalog, but an architectural abstraction to prevent chaos before it becomes entrenched.
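At its core, the concept reduces to a capability lookup. The sketch below is a deliberately tiny, hypothetical illustration, not the explored design: one layer where MCP servers and A2A agents register what they can do, and callers discover providers by capability instead of hard-wiring endpoints.

```python
# Hypothetical sketch of the registry idea: one lookup layer mapping a
# needed capability to whichever MCP server or A2A agent provides it.

class Registry:
    def __init__(self):
        self.entries: dict[str, dict] = {}

    def register(self, name: str, kind: str, capabilities: list[str]) -> None:
        self.entries[name] = {"kind": kind, "capabilities": set(capabilities)}

    def discover(self, capability: str) -> list[str]:
        return [n for n, e in self.entries.items()
                if capability in e["capabilities"]]

reg = Registry()
reg.register("doc-server", kind="mcp", capabilities=["search", "summarize"])
reg.register("planner", kind="a2a", capabilities=["plan"])
print(reg.discover("summarize"))  # ['doc-server']
```

A real gateway would add routing, auth, and versioning on top, but the anti-silo value comes from this shared discovery surface.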
It was less about solving today’s problem and more about anticipating tomorrow’s.
Seeing others converge on similar solutions reinforced that this was the right direction.
Projects & Side Work
1. Personal Website
One evening, out of boredom over dinner, I decided to build a personal website. What started as a spontaneous decision quickly turned into something more meaningful. It became a space I truly owned, free from algorithms and timelines, where ideas could live and evolve at their own pace. Over time, it turned into a place to reflect, document learnings, and connect dots across work, experiments, and personal growth. Writing things down forced clarity and made vague thoughts concrete.
2. BTO
Collecting the keys to our new home on 31 December was easily one of the biggest gifts of the year. It marked the end of a long waiting period and the start of a new chapter.
Huge shoutout to my wife for her patience, support, and for carrying so much of the planning and decision making along the way. This would not have happened without her.
Things I Learned
1. Do people really want innovation and change?
“Change is constant.”
“We must innovate to survive.”
“We must move faster.”
These phrases get repeated so often that they start to sound obvious. But in practice, they hide a harder question. Do people actually want what innovation and speed demand?
Innovation is easy to celebrate in theory. Change sounds exciting when it is abstract. But real change is uncomfortable. It asks people to let go of familiar ways of working, certainty, and control. It asks for trust, tolerance for messiness, and a willingness to operate with incomplete information.
This year, I learned that resistance is not always opposition. Often, it is caution. Sometimes it is fatigue. Other times, it is misalignment between what people say they want and what they are willing to give up.
Speed, especially at an AI-driven pace, is not just an operating model. It is a cultural contract. It requires people to accept fewer guarantees, more ambiguity, and faster feedback loops. Not everyone has the appetite for that, and that is okay.
Understanding this made me more thoughtful about how and when to push for change. The real challenge is not forcing innovation, but recognizing whether the environment truly wants it, and if not, deciding what kind of progress is actually possible.
2. Ideas are cheap, alignment is rare
Ideas come easily, and many of them are genuinely good. But I learned that the success of an idea is often less about its quality and more about where it comes from.
Sometimes, if an idea did not originate from the people expected to champion it, it simply did not get supported. Not because it was flawed, but because ownership matters. Ideas are deeply tied to identity, influence, and trust.
More often than not, the challenge was not the idea itself, but the human element around it. Alignment on problem framing, incentives, and authorship plays a bigger role than logic alone.
Recognizing this changed how I approached execution. Instead of pushing harder, I learned to focus on building shared ownership first. When people feel part of the idea, alignment follows, and progress becomes much easier.
Challenges (or better yet, opportunities to grow)
1. Empathy
I learned that being technically right is not the same as being effective. Clear logic and strong solutions do not automatically lead to progress if the people involved feel unheard, overwhelmed, or misaligned.
Empathy matters when working across different roles, priorities, and pressures. Engineers, product managers, leaders, and stakeholders often operate under very different constraints, even when they share the same goals. Taking the time to understand those constraints changes the conversation. It shifts interactions from persuasion to collaboration.
More often than not, understanding where others are coming from unlocked better outcomes than pushing harder or arguing more precisely. Empathy did not dilute the work. It made it more effective.
2. Becoming a better listener
Listening is a skill I continue to work on. This year reminded me that listening is not just about waiting for my turn to speak, but about creating space for others to fully articulate their thoughts.
I noticed that silence can be productive. When I resisted the urge to immediately respond, clarify, or solve, people often arrived at clearer conclusions on their own. Important details surfaced. Hidden concerns became visible.
Clarity rarely emerges from speed alone. It often comes from patience. By slowing down, asking fewer but better questions, and letting conversations breathe, I found that decisions improved and alignment became easier to achieve.
3. Personal growth
As in 2024, I felt there was never enough time to learn everything I wanted to learn.
There was no shortage of things worth exploring. The challenge was finding the space and energy to go deep instead of skimming the surface.
I also realized that learning requires more than access to information. It needs quiet time, focus, and room to reflect.
This made me more aware of how I spend my attention, and more deliberate about choosing what is worth learning next.
What I’m Looking Forward to in 2026
1. Doing more with less, with the help of AI
Doing more with less is not about working harder or moving faster. It is about reducing friction and focusing energy where it actually matters.
With AI, this means using it deliberately as a force multiplier rather than a crutch. Offloading repetitive, low-leverage work. Automating the parts that drain attention. Using AI to explore, summarize, prototype, and iterate faster, so more time is spent on thinking, decision making, and execution that truly requires human judgment.
It also means being more intentional about what not to do. Letting AI help surface signal from noise, challenge assumptions, and provide alternative perspectives. Not to replace thinking, but to sharpen it.
The goal for 2026 is simple. Fewer tools, fewer distractions, clearer priorities. Use AI to amplify focus, not fragment it.
2. Unlearning and relearning
Many assumptions feel stable simply because they have not been questioned in a while. They work, until they do not.
Over time, habits turn into defaults, and defaults quietly become beliefs. This year reminded me that some of the things I rely on are no longer as relevant as they once were.
Unlearning is uncomfortable. It means admitting that past experience does not always translate to future outcomes. It means letting go of certainty, even when it was earned.
Relearning, on the other hand, requires humility and curiosity. Staying open to being wrong, revisiting first principles, and updating mental models as the world changes.
Going into 2026, I want to stay flexible in how I think, quick to question assumptions, and comfortable with learning the same lessons again in a new context.
Final Thoughts
Looking back, 2025 was a year of building, questioning, and recalibrating. None of it happened in isolation.
I am deeply grateful to friends, colleagues, and family members for their patience, understanding, and quiet support. For the conversations that stretched my thinking, the disagreements that sharpened it, and the trust that made exploration possible.
Here’s to carrying the lessons forward with more intention, humility, and clarity.