Logitech's CEO envisions AI agents deciding alongside directors. The idea sounds as smart as it is dangerous. Who answers when the bot gets it wrong?

October, Washington. Logitech CEO Hanneke Faber stood before the Fortune Most Powerful Women Summit and proposed something that should have triggered more alarm than it did. AI agents in boardrooms. Not as support, but as decision-makers. Sitting around the table with directors, voting on strategy.
The room absorbed this quietly. A few nods. No real pushback. I kept thinking about that muted response.
Picture it: a board confronts a difficult choice. The system analyses the data, models the scenarios, and delivers its recommendation. Everyone stares at the screen. Someone breaks the silence: "The numbers point here." And nobody knows what to say next. Who disagrees with a calculation? Who argues that the data missed the human part?
Logitech already runs some of these systems. Bots handle meeting notes, summarisation, and idea generation. The infrastructure exists. What Faber is really asking is whether those systems should graduate from support staff to decision-makers. Should they have a vote? Should they influence the votes of others? When does a tool become an agent?
Start with the obvious problem: accountability. Directors operate under a fiduciary duty. You can sue them. They can be removed. If a decision goes catastrophically wrong, someone answers for it. An algorithm answers for nothing. You cannot fire a bot. It cannot resign. It feels no shame, carries no guilt, understands no consequence. Our entire system of corporate governance rests on the assumption that humans make choices and live with those choices.
The liability question keeps lawyers up at night, and rightfully so. When a bot recommends laying off a particular division and that division happens to be where most of your female engineers work, is that discrimination? The bot did not intend it. The algorithm simply optimised for cost. But the harm is done anyway. Who bears responsibility? The board that deployed it? The technologists who built it? The company that trained it on historical data that was already skewed? Indian regulators have not yet tackled this directly. They probably will. Soon. Sebi's 2025 AI Governance Framework offers a start, but it does not yet address who answers when algorithms shape board decisions.
Then there is the opacity question, which gets less attention than it deserves. When you ask a human director why she voted against a proposal, she can tell you. It might be a bad reason. It might be based on gut feeling. But you hear it. You can challenge it. You can understand her thinking. Now ask an algorithm why it recommends something. The answer involves thousands of mathematical operations across data that you cannot fully see. Explainability is improving, yes. But perfect transparency remains a fantasy. Directors are trained to make informed decisions. What happens when the information feeding those decisions is fundamentally opaque?
There is also the bias question, and this one strikes me as particularly relevant for India. Algorithms learn from history. If your historical data contains discrimination, the bot will likely reproduce it. Hiring practices with hidden bias patterns? The algorithm learns them. Lending decisions that favoured certain groups? It perpetuates them. You end up with discrimination that feels objective, mathematical, and justified by data. That is actually worse than the old kind because nobody questions it. The numbers do not lie, right? Except they do.
Some boards are trying something different, and I think they are onto something. Algorithms summarise the board papers so that directors actually read the materials and think about them instead of drowning in pages.
But when should you trust the bot? When should you ignore it? What happens when the data feeding it is incomplete or old? Those are real questions a competent director needs to answer. Some boards have started hiring AI ethics advisors. Smart move. You need people in the room who can explain what just went wrong when the algorithm fails.
There is something else worth saying about speed and governance. Not everything that can be decided quickly should be. The best board decisions often come from disagreement. From someone saying, "Wait, we are missing something." From friction between competing views. From taking time to think about stakeholder impact beyond pure financial optimisation. A bot cannot create that friction. It cannot say, "Actually, this decision harms people we should care about." It can only optimise for what you told it to optimise for.
The real divide is simpler than most frameworks suggest. One vision treats the bot as a tool. Information processor. Pattern finder. It surfaces what humans might miss, then exits. Humans decide. The other vision treats it as a participant. A voice in the room with legitimate authority over outcomes. Faber seems open to exploring the second. I think the first is all we can responsibly manage.
Why? Because governance requires someone to answer. When the decision fails, when layoffs hurt communities that matter, when the strategy unravels. Bots cannot face that reckoning. They feel nothing. They learn nothing. They offer no apology. The board still stands there to answer.
That is where this ends. Not with a solution. With a recognition that speed and efficiency are not the real measures of governance. The best boards argue. They listen to people who raise uncomfortable things. They live with their choices. If we hand that off to algorithms, we lose something we need to keep.
(The author is a C-suite+ and startup advisor, and researches and works at the intersection of human-AI collaboration. Views are personal.)