Why the Future of AI Software Relies on "Governance-by-Architecture": Insights by Balbodh Chauhan

Explore how “governance-by-architecture” is shaping the future of AI software. Insights by Balbodh Chauhan highlight the role of built-in governance frameworks in ensuring transparency, compliance, and responsible AI development.

The Hans India has been covering artificial intelligence closely as it moves from experimentation to mainstream economic and governance influence in India. This shift is evident in discussions of regulatory frameworks and trust in AI systems, as well as in coverage of the AI India Summit 2026. Initiatives such as the government's push for comprehensive AI roadmaps across ministries signal a watershed moment in India's approach to responsible AI development and administration. Against that backdrop, we are highlighting voices that connect global enterprise product leadership to responsible AI governance, moving beyond theoretical debate. We asked Balbodh Chauhan, a Senior Product Manager at Smartsheet who oversees AI governance-centric product experiences, how organisations around the world are building solutions that prioritise trust, auditability, and security as AI spreads across industries.

1. You began your career building sensor-driven systems in oilfields and now lead AI governance at Smartsheet. What thread connects those two worlds for you?

Ans: At first glance, oilfield sensors and enterprise SaaS products may seem very different. What they share is the need to translate large data streams into actionable insights and recommendations for humans.

In oilfields, we dealt with real-time sensor data from harsh environments where every detail mattered. The challenge was ensuring that this data was accurate and interpretable, because if the system misinterprets readings from wells or equipment, the consequences can be very costly for the organisation. AI products face a similar challenge today: AI systems rely on algorithms to inform decisions across business, finance, and product development. My professional approach has always been to design systems in which data pipelines and decision layers are transparent and observable, so that I can rely on them and avoid major mistakes.

2. You now lead AI governance and audit experiences. What mental models from building sensor-to-decision systems at Schlumberger still shape how you build trust-centric SaaS products today?

Ans: There are three: observability, fail-safe design, and human override.

Let’s start with observability. In sensor systems, every piece of data needs to be traceable. If something goes wrong, you can reconstruct exactly what happened by analysing the data traces. I take the same approach to AI systems: it is crucial that inputs and outputs are inspectable.
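
To make the idea concrete, here is a minimal sketch of that traceability principle in Python. The wrapper function, field names, and stand-in model are illustrative assumptions rather than a description of any Schlumberger or Smartsheet system.

```python
import json
import time
import uuid


def traced_inference(model_fn, payload, trace_log):
    """Run a model call while recording its input and output for later inspection.

    model_fn  -- any callable that turns a payload into a prediction
    payload   -- the input data, kept exactly as received
    trace_log -- an append-only list (or file/stream) holding trace records
    """
    trace_id = str(uuid.uuid4())
    record = {"trace_id": trace_id, "ts": time.time(), "input": payload}
    output = model_fn(payload)
    record["output"] = output
    trace_log.append(json.dumps(record))  # every decision leaves an inspectable trail
    return trace_id, output


# Example with a stand-in "model" and an in-memory log
log = []
trace_id, result = traced_inference(lambda p: {"score": 0.92}, {"sensor": "well-A17"}, log)
```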

Secondly, I design fail-safe architectures. In industrial settings, systems are built on the assumption that something will eventually fail, and when that failure happens, the system must recover without causing cascading issues. In my opinion, AI systems should follow the same principle.
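
A rough sketch of the fail-safe pattern, assuming a hypothetical ai_predict callable; the retry count and the fallback value are placeholders for whatever a real product would use.

```python
def resilient_decision(ai_predict, request, fallback_value, max_retries=2):
    """Try the AI path, but degrade to a known-safe default instead of cascading the error."""
    for _ in range(max_retries):
        try:
            return ai_predict(request), False   # normal path
        except Exception:
            continue                            # transient failure: retry, do not propagate
    return fallback_value, True                 # fail safe rather than fail loudly downstream


decision, used_fallback = resilient_decision(
    ai_predict=lambda r: 1 / 0,                 # simulate a model call that keeps failing
    request={"item": 42},
    fallback_value="route_to_human_review",
)
```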

Lastly, human override is no less crucial. You cannot trust AI as much as you trust humans, which is why even the most advanced automation systems require human oversight, especially when decisions directly affect data trust and revenue. The goal, as outlined earlier, is to avoid costly mistakes in any form, and human override is necessary to achieve that.
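
One way to express the override principle in code, with a hypothetical impact score and approval callback standing in for whatever review workflow a product actually uses.

```python
def apply_ai_decision(decision, impact_score, approve_fn, threshold=0.8):
    """Auto-apply low-impact AI decisions; anything above the threshold needs a human."""
    if impact_score < threshold:
        return {"applied": True, "decided_by": "ai", "decision": decision}
    approved = approve_fn(decision)             # e.g. a review queue or approval prompt
    return {"applied": approved, "decided_by": "human_override", "decision": decision}


# A high-impact change: the human reviewer gets the final say
result = apply_ai_decision("update_pricing_tier", impact_score=0.95, approve_fn=lambda d: False)
```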

3. At Smartsheet, you're driving AI governance across product areas. When you say “trust-centric experiences,” what does that actually mean in product terms? What metrics tell you that trust is increasing?

Ans: As mentioned in the previous answer, trust-centric product design means that AI is observable and predictable for its users. By ‘users’, I primarily mean engineers and product specialists who work with the tool at a deep architectural level. In practice, this includes audit trails of AI actions, visibility into data lineage, and explainable outputs rather than black boxes. From a product metrics perspective, trust shows up in measurable signals, such as fewer compliance-related support tickets and improved user retention following the introduction of AI features.
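
As an illustration of what one entry in such an audit trail might capture, here is a hypothetical schema; the field names and example values are assumptions for the sketch, not Smartsheet's data model.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AIAuditEvent:
    """One entry in an AI action audit trail (illustrative schema only)."""
    actor: str            # user or service that triggered the AI action
    action: str           # e.g. "summarise_sheet", "suggest_formula"
    input_refs: list      # pointers to source data (lineage), not copies of it
    output_summary: str   # human-readable description of what the AI produced
    model_version: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


event = AIAuditEvent(
    actor="user:123",
    action="summarise_sheet",
    input_refs=["sheet:789/rows:1-40"],
    output_summary="Generated a three-bullet status summary",
    model_version="2024-06",
)
print(asdict(event))      # what a compliance export or an auditor would see
```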

4. You conceptualised an AI-powered audit solution using natural language queries. How do you combine innovation speed with compliance and cybersecurity constraints?

Ans: I have noticed that speed and compliance are often seen as opposing forces, yet in enterprise software they complement each other. The approach I use is called governance-by-architecture: designing guardrails directly into the system. Under this approach, the design must include role-based access to audit capabilities, logging of every interaction, security reviews embedded in development cycles, and permission-scoped data access for queries. Natural language interfaces bring real usability benefits, but they also create ambiguity in how queries are interpreted, which is why the system must map natural language to controlled query layers.
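
A simplified sketch of that mapping, assuming the language model has already classified the user's question into a named intent; the roles, intents, and query templates below are invented for illustration, not taken from any real audit product.

```python
# Guardrails live in the system itself: pre-approved query templates, role checks,
# org-scoped parameters, and a log entry for every attempt, allowed or not.
ALLOWED_QUERIES = {
    "failed_logins": "SELECT * FROM audit_events WHERE type = 'login_failed' AND org_id = :org_id",
    "permission_changes": "SELECT * FROM audit_events WHERE type = 'perm_change' AND org_id = :org_id",
}

ROLE_PERMISSIONS = {
    "auditor": {"failed_logins", "permission_changes"},
    "viewer": {"failed_logins"},
}


def run_audit_query(user, intent, org_id, access_log):
    """Map a classified natural-language intent onto a controlled query layer."""
    allowed = intent in ROLE_PERMISSIONS.get(user["role"], set())
    access_log.append({"user": user["id"], "intent": intent, "allowed": allowed})
    if not allowed:
        raise PermissionError("This role may not run that audit query")
    # The language model never produces free-form SQL; it only selects a template,
    # and data access stays scoped to the caller's organisation.
    return ALLOWED_QUERIES[intent], {"org_id": org_id}


access_log = []
sql, params = run_audit_query({"id": "u42", "role": "auditor"}, "failed_logins", "org-7", access_log)
```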

5. From a product and systems perspective, what is required to build a highly automated payment infrastructure?

Ans: Automated payment infrastructure is a mammoth system that sits at the intersection of finance, product, and distributed systems engineering. It requires several foundational layers to work seamlessly. First, engineers need to establish a unified identity and payment permission model across all services. Every transaction must be tied to a verified entity, whether it is a user, an organisation, or an internal service, and the system must define what actions each entity is authorised to perform. Without this layer, it becomes extremely difficult to enforce security or maintain consistency in billing, subscriptions, and financial reporting.
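
As a toy illustration of such an identity and permission layer, the following sketch ties every transaction to a verified entity and an explicitly allowed action; the entity types and action names are assumptions made for the example.

```python
ENTITY_PERMISSIONS = {
    "user": {"charge_own_card", "view_own_invoices"},
    "org_admin": {"charge_own_card", "view_own_invoices", "update_subscription"},
    "internal_service": {"issue_refund", "retry_failed_charge"},
}


def authorise_transaction(entity_type, entity_id, action):
    """Every transaction is tied to a verified entity with an explicitly allowed action."""
    if not entity_id:
        raise PermissionError("Transaction must be tied to a verified entity")
    if action not in ENTITY_PERMISSIONS.get(entity_type, set()):
        raise PermissionError(f"{entity_type} is not authorised to perform {action}")
    return {"entity": f"{entity_type}:{entity_id}", "action": action, "authorised": True}


receipt = authorise_transaction("org_admin", "org-7", "update_subscription")
```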

Secondly, there should be event-driven data flows that automatically trigger downstream processes, such as billing updates or service access. Automation is also critical for handling exceptions. Systems need to detect anomalies, evaluate risk thresholds, and resolve most issues automatically without manual intervention. The goal is to minimise human involvement in routine transactions while ensuring that the infrastructure remains secure and scalable.
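
A compact sketch of the event-driven and exception-handling idea; the event types, handlers, and risk rule are illustrative placeholders, and a real system would use far richer risk signals.

```python
def handle_payment_event(event, risk_threshold=0.7):
    """Route routine payment events automatically; score exceptions before escalating."""
    routine = {
        "payment_succeeded": lambda e: f"update_billing({e['account']})",
        "subscription_renewed": lambda e: f"extend_access({e['account']})",
    }
    if event["type"] in routine:
        return routine[event["type"]](event)            # no human in the loop

    # Exception path: a toy risk score based on amount; a production system would
    # also look at history, velocity, and geography before paging anyone.
    risk = min(1.0, event.get("amount", 0) / 10_000)
    if risk < risk_threshold:
        return f"auto_retry({event['account']})"
    return f"escalate_to_ops({event['account']})"


print(handle_payment_event({"type": "payment_failed", "account": "acct-9", "amount": 12_000}))
```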

6. Where do you believe AI governance and enterprise audit tooling are heading in the next 3–5 years? What are most SaaS companies underestimating?

Ans: Most SaaS companies will likely continue to treat governance as a compliance requirement rather than a core product capability. Nevertheless, I think that over the next few years, governance tooling will become embedded directly into AI platforms, including automated risk scoring and data lineage tracking. As for what companies are underestimating: how quickly regulatory expectations for AI transparency will evolve. It will therefore be critical for them to design governance layers early on.

7. You are an alum of IIT and ISB, two of India's top schools, then worked at Amazon and McKinsey, and are now at Smartsheet. What was the one point that changed your trajectory?

Ans: The turning point for me was realising that the most impactful work happens at the intersection of technology and business. Looking back, I focused solely on technology early in my career. However, observing my peers' experience and consulting more experienced industry specialists taught me that strategic thinking matters just as much.

8. If someone wants to operate at the intersection of finance, engineering, and AI as you do, what should they deliberately build early in their career?

Ans: I believe they should invest in technical literacy, to understand how systems work; in economic thinking, to understand how technical decisions affect business outcomes; and in data intuition, to navigate signals in complex datasets. The intersection of these skills is where many of the most important product decisions happen.
