
Search Results


  • WOMEN x AI Demo Day Recap: What Founders Are Building Right Now

    Female Founders Presenting at WxAI and Convex Demo Day

Last night in San Francisco, we hosted the WOMEN x AI Demo Day in partnership with Convex. Seven founders got up and showed what they’ve actually built. Products in motion. Early traction. Real feedback. This is where you go beyond the headlines and the hype and start to see what’s real in AI.

📸 A big thank you to Natasha Renée of Featured Founders, who helped us capture the evening. Check out the Demo Day gallery!

What We Saw

A clear shift is happening. AI is moving from experimentation to infrastructure. And the founders building right now are solving specific, high-value problems.

Key Trends from the Demo Day

1. AI is filling real labor gaps
One company focused on accounting workflows is targeting mid-market and enterprise teams preparing to go public. This isn’t about replacing accountants. It’s about addressing a growing shortage and reducing audit risk.

2. Vertical AI is driving adoption
The strongest companies were deeply focused:
• Accounting and financial compliance
• Consulting workflows and data analysis
• Fundraising and investor matching
These are not generic tools. They are built for specific buyers with clear ROI.

3. AI is reshaping services
Intriq AI is building autonomous agents for the consulting industry, targeting the manual data work that happens before insights are generated. Early traction includes:
• Paid users
• Billions of rows processed
• Pilot programs with major firms
The takeaway: services industries are being rebuilt with AI at the core.

4. Agentic systems are becoming real products
Several founders are building beyond single tools into systems:
• Event coordination platforms for conferences
• Creator assistants for social commerce
• Fundraising platforms evaluating founder readiness
These systems are designed to act, not just assist.

5. Distribution is a differentiator
The companies gaining traction had clear distribution:
• Partnerships with conferences and ecosystems
• Integration into existing platforms
• Community-driven growth
Product alone is not enough.

6. AI is expanding access
SprintFit is focused on improving access to capital for women and underrepresented founders. Instead of broad marketplaces, they’re building:
• Founder readiness scoring
• Investor matching
• Thesis-driven connections
The focus is on who gets seen, not just efficiency.

Meet the Female Founders Who Demoed

We heard from seven founders building across AI:

Chandrika Maheshwari — Quivly AI: AI for accounting workflows and audit risk reduction for enterprise finance teams

Erin Kim — Mesh (YC W25): Turns spreadsheets into agentic systems, helping teams unify fragmented financial and operational data across tools, teams, and formats

Meg McWilliams — Mixies: Agentic platform for event coordination across conferences and IRL communities

Jessica Liew — Intriq AI: Autonomous AI agents for consulting workflows, focused on data collection and analysis

Kruti Baires — SprintFit: AI-powered fundraising platform supporting women and underrepresented founders

Dhivya Vijayakumar — Velvee: Your AI shopping assistant, the missing layer between inspiration and purchase

Anjali Kakkadpatel — Aligned Rewards: AI-powered platform that unifies goal tracking, performance management, and rewards into a single system to reduce tool fragmentation and improve organizational alignment

You can see the slides from all of the demos here; you can also access the Google Slides directly.

Why This Matters

AI is moving into high-stakes workflows: finance, consulting, hiring, and fundraising. The strongest companies are specific, outcome-driven, and embedded in real workflows. This is where AI adoption is happening.

Final Thought

If AI is part of your product, your workflow, or your investment strategy, this is the level to pay attention to. Not headlines. Not tools. What’s actually being built by founders.

AI Titans Tools Council

We shared our current Battle Tested AI Tools from the AI Titans Tools Council at the end of the demos. If you're looking to adopt new AI tools, this is a great place to start because we walk you through what works and what doesn't for each tool.

About WOMEN x AI

WOMEN x AI is a global community helping women lead, build, and shape the future of artificial intelligence. Sign up for our free WxAI membership to be kept in the loop for speaking opportunities, events, and more.

  • 12 AI Governance Questions Every Board Should Ask

    Artificial intelligence is reshaping enterprise strategy, risk exposure, and operational systems. As a result, boards are increasingly responsible for structured AI governance oversight. Boards do not need to understand model architecture. They do need to ask better governance questions. The quality of AI oversight is often determined not by technical depth, but by the discipline of inquiry. These AI governance questions for boards are grounded in fiduciary responsibility and structured oversight. For a broader framework on how directors can structure AI oversight, see our AI Governance for Boards: Frameworks, Risk, and Oversight guide.

Strategic Alignment Questions

How does AI support our long-term enterprise strategy?
AI deployment should connect directly to competitive positioning, margin structure, customer value, or operational resilience.

Where is AI materially influencing revenue, cost structure, or valuation?
If AI influences enterprise value, it belongs in structured oversight.

Are AI initiatives aligned with our defined risk tolerance?
Innovation without risk framing creates exposure.

Organizational Accountability Questions

Who owns AI strategy at the executive level?
Shared responsibility without defined ownership weakens accountability.

Who is responsible for monitoring AI risk and reporting to the board?
Oversight requires clarity.

Are AI responsibilities reflected in leadership incentives or performance metrics?
If accountability is not embedded operationally, governance is symbolic.

Boardroom Practice Questions

Which committee oversees AI strategy and risk?
AI oversight should not remain informal.

How often does AI appear in recurring reporting?
One-time updates are education. Recurring cadence is governance. Many boards are still figuring out how to structure this discussion. Directors can begin by defining where AI appears in strategy reviews, enterprise risk discussions, or committee oversight. See How to Integrate AI Into the Board Agenda for a practical guide on embedding AI into recurring board discussions.

Do we have defined escalation thresholds for material AI exposure?
If escalation decisions are discretionary, oversight may be inconsistent.

Risk and Compliance Questions

How is AI risk integrated into enterprise risk management?
AI risk should not exist outside structured ERM processes.

Do we have visibility into third-party AI vendor exposure?
Many AI systems are embedded within vendor platforms. External exposure is internal risk. Many organizations underestimate vendor AI exposure. Boards should ensure oversight extends to third-party systems. Learn more in How Boards Should Oversee Third-Party AI Vendors.

How are regulatory developments and compliance obligations monitored?
The AI regulatory landscape is evolving. Monitoring should be assigned and structured.

Stakeholder Impact Questions

How do we monitor bias, fairness, and explainability in material AI systems?
Ethical commitments must translate into measurable oversight.

What documentation exists to demonstrate structured AI oversight?
In regulatory inquiry, litigation, or transaction diligence, documentation determines defensibility.

Why These Questions Matter

AI governance questions for boards are not designed to slow innovation. They are designed to surface exposure. Directors who consistently ask these questions:
• Clarify accountability
• Strengthen committee alignment
• Increase visibility
• Improve defensibility
The absence of these questions often signals governance immaturity.

Within a Structured Governance Framework

These questions align directly with structured AI oversight across the core dimensions outlined in our AI Governance for Boards framework and the Board Director’s AI Governance Compass:
• Individual readiness
• Boardroom practices
• Organizational accountability
• Stakeholder impact
Boards that adopt a disciplined inquiry model reduce ambiguity and strengthen fiduciary oversight. AI governance is about asking the right questions consistently.

Final Perspective

AI will continue to evolve. The board’s responsibility is not to predict every outcome. It is to ensure that oversight evolves with deployment. Strong governance begins with disciplined inquiry. Boards that ask better questions build better structures.

  • AI Governance vs AI Ethics: What Directors Need to Know

    Artificial intelligence discussions often blur the line between AI governance and AI ethics. For boards of directors, that distinction is not semantic. It is structural. AI ethics defines principles. AI governance defines accountability. Understanding the difference between AI governance and AI ethics is essential for directors who carry fiduciary responsibility.

What Is AI Ethics?

AI ethics refers to the principles that guide how artificial intelligence should be designed and deployed. Ethical AI conversations typically focus on:
• Fairness and bias
• Transparency and explainability
• Privacy and consent
• Societal impact
• Responsible innovation
Many organizations publish AI ethics statements. They create advisory councils. They articulate values. Those steps matter. But principles alone do not create oversight. An AI ethics policy does not reduce fiduciary exposure if it is not backed by governance structures.

What Is AI Board Governance?

AI governance refers to the board-level structures, oversight processes, and accountability mechanisms that ensure AI strategy and AI risk are aligned with enterprise value. AI governance answers operationally uncomfortable questions:
• Who is accountable for AI outcomes?
• How is AI risk monitored and escalated?
• What reporting reaches the board?
• When does AI deployment require formal review?
• How are third-party AI vendors evaluated?
AI governance is not about debating principles. It is about defining responsibility. For directors, governance sits at the level of fiduciary duty and enterprise risk oversight.

The Core Difference: Principles vs Accountability

The difference between AI governance and AI ethics becomes clear under pressure. AI ethics defines what the organization believes is responsible. AI governance determines what happens when those beliefs are tested. When product deadlines compress. When regulatory scrutiny increases. When a model produces unexpected outcomes. When a customer challenges an AI-driven decision. Ethics lives in policy documents. Governance lives in reporting structures, committee mandates, risk thresholds, and escalation pathways. Boards do not manage model design. They are responsible for ensuring that ethical commitments survive operational reality.

Why the Distinction Matters for Boards

A common governance mistake is assuming that approving an AI ethics policy satisfies board oversight. It does not. Without structured AI governance:
• Ethical commitments may not be operationalized.
• Risk reporting may be inconsistent.
• Vendor exposure may go unexamined.
• Accountability may remain diffuse.
Undefined AI risk is ungoverned AI risk. Ethics informs values. Governance enforces accountability.

Where AI Ethics Fits Within AI Governance

AI ethics is not separate from governance. It is embedded within it. In a structured AI governance framework, ethics shows up through:
• Bias monitoring requirements
• Transparency and explainability standards
• Defined AI risk tolerance levels
• Vendor due diligence protocols
• Board-level reporting cadence
Ethical principles must be translated into measurable oversight. If there is no reporting, there is no governance. If there is no accountability, there is no governance.

The Board’s Role in AI Governance

The board’s role is not to become an ethics committee. The board’s role is to ensure that management:
• Embeds ethical principles into operational controls
• Integrates AI risk into enterprise risk management
• Defines ownership for AI decision-making
• Escalates material AI exposure appropriately
• Aligns AI deployment with long-term enterprise value
Directors must ask, "How are ethical commitments monitored? What metrics indicate ethical risk? Who owns remediation if AI systems cause harm?" These are governance questions.

The Governance Reality

AI is not just another IT initiative. It shapes pricing decisions, hiring processes, customer experiences, supply chains, and capital allocation. In some cases, it influences outcomes in ways that are not easily explainable. That shifts AI from a technical discussion to a governance discipline. AI ethics defines intention. AI governance defines control. For boards, the distinction determines whether oversight is symbolic or structural. For a structured board-level methodology that integrates strategy, risk, accountability, and stakeholder impact into AI oversight, see our guide to AI Governance for Boards, which introduces the AI Governance Compass framework.

Final Distinction

AI ethics asks: What is responsible? AI governance asks: Who is accountable, how is it monitored, and what happens when it fails?

  • AI and Board Committees: Where Should Oversight Sit?

    As artificial intelligence reshapes enterprise risk and strategy, boards face a practical governance question: Which committee oversees AI? AI and board committees are now directly connected. Oversight cannot remain informal, and responsibility cannot remain ambiguous. If AI influences enterprise value, committee ownership must be defined.

Why Committee Placement Matters

Board committees are not administrative structures. They are accountability mechanisms. Committee placement determines:
• How often AI is reviewed
• What information reaches directors
• Whether risk or strategy dominates discussion
• How escalation occurs
• How defensible oversight appears in documentation
Without defined committee ownership, AI governance remains diffuse. Diffuse oversight increases exposure.

Common Committee Approaches to AI Oversight

There is no single model. However, most boards place AI oversight within one of four structures.

1. Audit Committee
Some boards assign AI oversight to the Audit Committee when AI exposure is tied to:
• Regulatory compliance
• Financial reporting
• Internal controls
• Data governance
This approach emphasizes risk containment.
Risk: Strategic AI initiatives may receive insufficient attention.

2. Risk Committee
Boards with dedicated Risk Committees often embed AI into enterprise risk review. This works when AI exposure affects:
• Operational resilience
• Regulatory obligations
• Vendor dependencies
• Reputational exposure
Risk: AI innovation and competitive positioning may be siloed.

3. Technology or Innovation Committee
Boards in technology-forward companies may house AI oversight within Technology or Innovation Committees. This model supports:
• Product strategy alignment
• AI roadmap oversight
• Competitive positioning
Risk: Compliance and fiduciary risk discussions may be underweighted.

4. Hybrid Committee Model
Many mature boards adopt a hybrid structure:
• The full board reviews AI strategy
• Risk or Audit Committees oversee compliance and exposure
• Technology Committees review technical direction
This reflects the cross-functional nature of AI.
Risk: Without clear coordination mechanisms, accountability may fragment.

The Governance Mistake Boards Make

A common mistake is assigning AI oversight to a committee without updating that committee’s charter. If AI responsibility is informal, accountability is ambiguous. Committee charters should explicitly reference:
• AI strategy oversight
• AI risk monitoring
• Third-party AI vendor exposure
• Escalation thresholds
If AI exposure is material, the governance structure should reflect it. Need help getting AI onto the board agenda? We have a guide to help.

How to Decide Where AI Oversight Should Sit

Boards should consider:
• Degree of AI deployment: Is AI central to the product or peripheral to operations?
• Regulatory exposure: Does AI deployment increase compliance risk?
• Financial materiality: Does AI influence revenue, margin, or valuation?
• Vendor dependence: Are third-party AI systems embedded into critical workflows?
The more material AI becomes, the more visible oversight must be. Oversight depth must scale with exposure.

Coordination Between Committees

AI rarely fits neatly into one domain. For example:
• A product AI feature may create compliance risk.
• A vendor AI integration may affect cybersecurity posture.
• An AI pricing system may influence revenue recognition.
Committee structures must allow cross-committee communication. Without coordination, AI governance becomes siloed. Siloed oversight weakens fiduciary defensibility.

Linking Committees to Executive Accountability

Committee placement must connect to management structure. Boards should clarify:
• Which executive presents AI strategy updates?
• Who owns AI risk monitoring?
• Who reports on vendor AI exposure?
• How are incidents escalated to committees?
Committees without defined reporting lines are procedural, not protective.

AI and the Governance Framework

Committee placement is one element of structured AI governance. Within the AI Board Governance Compass, committee oversight intersects with:
• Boardroom Practices
• Organizational Accountability
• Stakeholder Impact
Committees operationalize governance architecture. Without defined committee ownership, oversight remains theoretical. Learn more about the AI Board Governance Compass framework in our helpful guide post.

Final Perspective

AI oversight is about ensuring that AI exposure is visible, accountable, and structurally embedded into board processes. Boards that clearly define committee ownership strengthen governance defensibility. Boards that rely on informal delegation increase exposure. Committee clarity is not bureaucracy. It is fiduciary discipline.

  • AI Board Governance in Practice

    For our in-person AI Board Governance in Practice event on 3/19/2025 we used the following resources. We also wanted to extend a huge thank you to the Silicon Valley AI Hub for hosting us and sponsoring the event.

The AI Board Governance Compass

The Compass comprises four quadrants: individual, board, organization, and stakeholder. Each quadrant has four subcomponents, for a total of 16 elements that boards should score themselves on.

Associated Scoring Sheet

The AI Board Governance Compass Scoring Sheet is designed to help you evaluate where your board needs to focus when it comes to AI. The sheet will open in Google Docs; go to File > Make a Copy so you can edit it.

AI Board Governance Compass Scoring Sheet Preview

Case Study: DataFlo Solutions 2.0

Use the case study to apply the AI Board Governance Compass in a small group setting. Act as if you are on the board of DataFlow Solutions. You can download a printable version of Case Study 2.0 below:

Event Slides

We were honored to give away four copies of Telle Whitney's book, Rebooting Tech Culture. If you weren't one of the winners, it is available for purchase. Telle Whitney also highly recommended checking out Nora Denzel's Substack: Nora's Top 6 Recommendations for Board Members to Become More Familiar with AI.

For additional information on applying this to your board, download our white paper on the AI Board Governance Compass.
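As a rough illustration of how a 16-element self-assessment like this might be tallied, here is a minimal sketch. The four quadrant names come from the Compass; the 1-5 scale, the per-quadrant averaging, and the example scores are hypothetical assumptions, not the official scoring sheet.

```python
# Hypothetical tally of an AI Board Governance Compass self-assessment.
# Assumption: each quadrant's four elements are scored 1-5; the official
# scoring sheet may weight or aggregate differently.

def summarize_scores(scores):
    """Average each quadrant's four element scores and flag the weakest quadrant."""
    summary = {}
    for quadrant, elements in scores.items():
        if len(elements) != 4:
            raise ValueError(f"{quadrant}: expected 4 element scores, got {len(elements)}")
        summary[quadrant] = sum(elements) / 4
    weakest = min(summary, key=summary.get)
    return summary, weakest

# Example: a board strong on individual readiness, weak on stakeholder impact.
scores = {
    "individual": [4, 5, 4, 4],
    "board": [3, 3, 4, 2],
    "organization": [3, 2, 3, 3],
    "stakeholder": [2, 2, 1, 2],
}
summary, weakest = summarize_scores(scores)
print(summary)   # per-quadrant averages
print(weakest)   # the quadrant to focus on first
```

The point of the sketch is the shape of the exercise: score every element, compare quadrants, and let the lowest average drive where the board focuses next.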

  • Ethical AI Agents in Customer Journeys: A Practical Framework

    Author: Pooja Kashyap is a technology enthusiast and Conversational AI Evangelist at Conversive AI, operating at the intersection of academia and industry while shaping thought leadership across the messaging and AI ecosystem. She is also a marathon runner with nearly 40 races completed since 2016, bringing endurance and curiosity to both her work and life.

In 2018, Amazon quietly scrapped an AI hiring tool it had spent years building. The reason: the system had taught itself to penalize resumes from women. It had been trained on a decade of historical resumes, predominantly from men, and learned to replicate the pattern.

What strikes me most about that story isn't the bias. It's the invisibility. Nobody intended it. Nobody caught it early. The women whose resumes were silently downgraded never knew it happened. They just didn't get the call. And the person who built the system went home.

That was 2018. The scale is now incomparably larger. AI agents recommend products, approve credit, resolve complaints, and personalize experiences across millions of customer journeys simultaneously. The Amazon case was a contained internal failure. What we're building now is systemic, customer-facing, and moving at a speed that outpaces most organizations' ability to audit it. In an AI-saturated world, capability will be abundant while trust will be scarce. And the people who will feel the gap most acutely are the ones our systems were never quite designed for.

This piece won't just name what's broken. It offers a working framework: five fault lines, one accountability model, and a sequence any team can act on, built for practitioners who are done waiting for someone else to fix it.

The Five Ethical Fault Lines

AI agents don't fail loudly. They drift. They compound small blind spots into systemic issues.
And because the failure is gradual and invisible, it tends to be the most marginalized people who absorb it longest before anyone with authority notices. Here are the five fault lines that matter most, not just as business risks, but as moral ones.

1. Algorithmic Bias

AI learns from historical data. If that data reflects past inequities, and it almost always does, your agent quietly encodes them. The 2025 Stanford HAI AI Index Report identifies bias as one of the most persistent and least resolved challenges across AI deployments.

A credit model that systematically disadvantages certain demographics. A support bot that under-serves non-English speakers. A recommendation engine that over-serves high-income segments. None of these are intentional. All of them are choices: choices embedded in what data we selected, what outcomes we optimized for, and what questions we forgot to ask before launch.

The Amazon hiring case wasn't a rogue algorithm. It was a reflection of a decade of real hiring decisions. The system learned what the organization actually valued, not what it said it valued.

The operational fix is treating fairness as a living metric: tracked, owned, and reviewed with the same cadence as revenue. But the deeper fix is asking, before we build: Whose experience are we not designing for? If that question doesn't have an answer, we're not ready to build.

Bias tells a system what to value. But what happens when no one, not even your own team, can explain what the system decided? That's where the second fault line opens.

2. The Black Box Problem

A customer is denied a service. They ask why. Nobody on your team can tell them. That's not a transparency gap. That's a power gap.

The EU AI Act mandates explainability for high-risk AI systems. But regulation is trailing reality. Customers facing consequential AI decisions today, on credit, insurance, healthcare access, customer service prioritization, often have no explanation and no appeal.
For communities already navigating systemic disadvantage, that's not a UX problem. It's a justice problem.

Explainability isn't a technical feature you bolt on. It's a design commitment you make at the start. If your team can't articulate how your AI arrives at a decision that affects someone's life or financial standing, that system isn't ready to make that decision.

3. The Consent Illusion

Most customers 'consent' by clicking terms they never read. That's legally sufficient. It stopped being ethically sufficient a long time ago.

A Pew Research study found that 79% of Americans are concerned about how companies use their data, and most feel powerless to stop it. That powerlessness is not evenly distributed. It falls heaviest on people who have the fewest alternatives, the least legal recourse, and the least time to navigate the fine print. That powerlessness has a face. It's more often a woman, a non-native speaker, or someone without the time or legal access to fight back.

AI agents are data-hungry by design; they learn continuously. And without intentional boundaries, personalization becomes surveillance. The standard we should hold ourselves to isn't GDPR minimum compliance. It's this question: Would I feel comfortable if the person whose data this is could see exactly how we're using it? If the answer is no, the architecture needs to change.

4. The Autonomy Trap

Modern AI agents don't just answer questions. They modify subscriptions, issue refunds, adjust pricing, and trigger account changes. The AI Incident Database catalogs hundreds of real-world cases where autonomous systems caused unintended harm: financial, reputational, and personal. What's consistent across those cases isn't malicious intent. It's overconfidence in the system's ability to understand context it wasn't designed for.

Autonomy increases efficiency. It also increases irreversibility.
And when AI acts incorrectly on a high-stakes decision, a fraudulent refund denial, an erroneous account suspension, a mishandled complaint from a customer in distress, the cost isn't just operational. It's relational. It's the moment someone decides a company doesn't deserve their trust anymore.

The principle from NIST's AI Risk Management Framework is worth internalizing: calibrate autonomy to consequence. The higher the stakes and the less reversible the action, the more humans need to be in the loop, not because AI isn't capable, but because accountability needs a face.

5. Persuasion vs. Manipulation

AI systems know when you're most likely to convert, what framing induces urgency, and how emotional context shapes decisions. That's a form of power that most users don't know is being exercised. The FTC has begun issuing guidance specifically targeting AI-driven practices that exploit psychological vulnerabilities.

Personalization that genuinely serves a customer and manipulation that extracts from them can look identical from the outside. The distinction lives in intent, in data, and in the honest answer to a question we should all be asking internally: Does this interaction benefit the person it's directed at, or does it benefit us at their expense? If you can't document the answer, you don't have a personalization strategy. You have a liability waiting to be named.

Accountability Architecture Is the Real Differentiator

Across all five fault lines, a pattern emerges. Ethical failures in AI are almost never failures of individual malice. They're failures of accountability architecture: nobody owned the question, so nobody answered it. Map these fault lines to ownership, and the ethical gaps become manageable. Here's what mature accountability actually looks like across each one.

• Algorithmic Bias. Accountability question: Who owns fairness metrics alongside revenue KPIs? Maturity: Monthly bias audit with named accountable lead.
• Black Box Decisions. Accountability question: Who can explain a consequential AI decision to the customer it affected? Maturity: Plain-language rationale available on request.
• Data Privacy. Accountability question: Who signs off on data minimization and genuine opt-outs? Maturity: Opt-out paths that don't degrade core service.
• Autonomy Thresholds. Accountability question: Who defines and enforces limits on irreversible actions? Maturity: Human checkpoint required for high-stakes actions.
• Persuasion vs. Manipulation. Accountability question: Who audits whether AI interactions serve the user or extract from them? Maturity: Documented internal standard, reviewed quarterly.

Regulation Is Converging. Governance Is Becoming Strategy.

The EU AI Act applies to any company serving EU residents regardless of where it's headquartered. It establishes a structured risk-based framework and mandates human oversight, documented conformity assessments, and explainability for high-risk AI systems, including credit scoring, employment decisions, and personalization at scale. For practitioners, the strategic read is this: the organizations building for these standards today will face less regulatory shock later. More importantly, they'll have developed the governance reflex that their competitors are still scrambling to learn. Treat it as an ethical floor, not a compliance ceiling.

If you're interested in the intersection of AI governance and board-level accountability, Anthropic's published model specification is a rare public example of how an AI organization articulates design-level ethical intent. It's worth reading regardless of whether you use their products.

Regulation tells you the floor. It doesn't tell you how to build. Here's the sequence that actually works in practice.

Ethical AI Is Not a Milestone. It’s an Operating Discipline.

Building ethical AI isn't a phase you complete; it's a discipline you embed.
Based on NIST’s AI Risk Management Framework and practitioner experience across AI deployments, the practical sequence looks like this:

1) Define the boundaries first. Before deciding what the agent will do, be explicit about what it won’t do. Name the hard stops: regulated advice, sensitive data use, manipulative behavior, out-of-scope decisions. Write them down in plain language and align legal, product, and compliance. When something breaks, this becomes your accountability standard.

2) Instrument fairness before launch. Don’t treat fairness only as a value; define it as metrics. Track resolution rates by segment, error rates by geography, offer distribution across groups. Set thresholds and assign one clear owner. If it’s not measured and owned, it won’t be managed.

3) Design escalation as deliberately as you design the AI. Automation needs human checkpoints by design, not by accident. Decide what triggers escalation: low confidence, high impact, user frustration. Define how context transfers so humans aren’t starting cold; if this isn’t specified early, it won’t work under pressure.

4) Monitor continuously, not just at launch. Launch isn’t the finish line; it’s the starting point. Bias drifts, users change, regulators reinterpret. Build audits, feedback loops, and red-team reviews into operations. Ethical AI is an ongoing discipline, not a release-day checkbox.

Building AI Systems People Can Trust

Transparency isn't vulnerability. Customers who trust your AI share more data, accept recommendations more readily, and stay longer. But more than that, people who are treated fairly by AI systems are people who weren't failed by us.

There's a business case for all of this: reduced regulatory risk, stronger retention, lower churn, better talent attraction. Those arguments are real and worth making in every boardroom conversation. For more, check out our AI Board Governance Framework. It's designed to help you have these conversations.
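To make the instrumentation and escalation steps above concrete, here is a minimal sketch of what "fairness as metrics" and "calibrate autonomy to consequence" can look like in code. The segment names, the 0.80 resolution-rate floor, the 0.9 confidence threshold, and the `Decision` fields are illustrative assumptions, not part of NIST's framework or any particular product.

```python
# Illustrative sketch: per-segment fairness metrics with a threshold (step 2)
# and simple escalation triggers for human review (step 3). All names and
# thresholds here are hypothetical assumptions chosen for the example.
from dataclasses import dataclass

def resolution_rate_by_segment(interactions):
    """Track resolution rates per customer segment from (segment, resolved) pairs."""
    totals, resolved = {}, {}
    for seg, was_resolved in interactions:
        totals[seg] = totals.get(seg, 0) + 1
        resolved[seg] = resolved.get(seg, 0) + (1 if was_resolved else 0)
    return {seg: resolved[seg] / totals[seg] for seg in totals}

def fairness_alerts(rates, floor=0.80):
    """Flag segments below the agreed floor so the named owner is notified."""
    return [seg for seg, rate in rates.items() if rate < floor]

@dataclass
class Decision:
    confidence: float    # the model's self-reported confidence
    reversible: bool     # can the action be undone cheaply?
    user_frustrated: bool

def needs_human(decision, min_confidence=0.9):
    """Escalate on low confidence, irreversibility, or user frustration."""
    return (decision.confidence < min_confidence
            or not decision.reversible
            or decision.user_frustrated)

# A segment that is quietly under-served surfaces as an alert, not an anecdote.
rates = resolution_rate_by_segment([
    ("en", True), ("en", True), ("en", True), ("en", True),
    ("non_en", True), ("non_en", False), ("non_en", False),
])
print(fairness_alerts(rates))  # → ['non_en']
# An irreversible action routes to a human even at high confidence.
print(needs_human(Decision(confidence=0.95, reversible=False, user_frustrated=False)))  # → True
```

The design point is that both checks are boring, ownable artifacts: a metric with a floor and a rule with named triggers, which is exactly what makes them auditable later.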
But for those of us building these systems, I think there's a more direct obligation that doesn't need a business case to justify it.

The woman whose loan application was silently downgraded by a biased model doesn't know we built it. The customer whose emotional distress was used to trigger an upsell doesn't know it was automated. The person who got a worse outcome because of a zip code or a name doesn't know there was a system involved at all. We know. We built it. That means something.

The future of AI won't be defined by which organizations deploy the most agents. It will be defined by which ones can look at their systems and say: the people this touches, including the ones it got wrong, were treated with fairness and honesty. That's not a compliance posture. It's a practitioner's responsibility.

If this resonates and you're navigating these questions inside your own organization, I'd love to hear how you're approaching it. Connect with me on LinkedIn or reach out through Conversive AI, where we work with organizations navigating exactly these questions.

  • How to Integrate AI Into the Board Agenda: A Practical Guide for Directors

Artificial intelligence is increasingly influencing strategy, operations, and enterprise risk. Yet many boards still treat AI as an occasional update rather than a recurring governance topic. If AI affects enterprise value, it belongs on the board agenda. The question is not whether AI should appear on the agenda. The question is how.

Step 1: Clarify the Purpose of AI Discussions

AI conversations can drift into technical detail unless the board defines its governance objective. AI should appear on the agenda in one of three contexts:

• Strategy: How is AI shaping competitive positioning, product differentiation, or cost structure?
• Risk: What enterprise risks are introduced by AI systems, vendor relationships, or data dependencies?
• Oversight and Accountability: Who owns AI execution? What reporting structures exist? What thresholds require board review?

When directors anchor discussions to these lenses, AI becomes a governance conversation rather than a technology briefing.

Step 2: Determine Agenda Cadence

AI should not be a one-time presentation. The cadence depends on company exposure:

• Early-stage companies experimenting with AI may address it quarterly.
• Growth-stage companies deploying AI into products may require more frequent updates.
• Highly regulated industries may require structured reporting tied to risk committees.

AI governance becomes durable when it is integrated into recurring agenda structures. This may include:

• Quarterly AI strategy updates
• AI risk as part of enterprise risk review
• Committee-level AI oversight discussions

Consistency matters more than frequency.

Step 3: Define Reporting Expectations

Boards should be clear about what information they expect from management. Effective AI reporting may include:

• Overview of AI initiatives and objectives
• Risk assessment updates
• Vendor AI exposure
• Regulatory developments
• Performance metrics tied to enterprise value

Reporting should focus on decision impact and risk management, not model architecture.
The board’s role is oversight, not execution.

Step 4: Align AI with Committee Structure

AI oversight must have a home. Some boards place AI under Audit or Risk Committees, Technology Committees, or Strategy Committees. Others establish cross-committee coordination when AI affects multiple domains. The critical factor is clarity. AI oversight should not sit informally between committees. Directors should know which committee reviews AI risk, which committee reviews AI strategy, and when full-board discussion is required.

Step 5: Establish Escalation Thresholds

Not every AI initiative requires full board review. Boards should define clear thresholds for escalation, such as:

• Material impact to revenue or cost structure
• Significant regulatory exposure
• AI influencing high-stakes customer decisions
• Expansion of AI into new product categories

When thresholds are predefined, oversight becomes structured rather than reactive.

Common Mistakes Boards Make

• Treating AI as a one-time education topic: Education is important. Governance requires recurring oversight.
• Allowing AI updates to remain tactical: Operational updates without strategic framing dilute board focus.
• Failing to assign ownership: Without defined accountability, AI governance becomes diffuse.

A Governance-First Approach

Integrating AI into the board agenda is not about adding more meetings. It is about embedding AI oversight into existing governance structures. Boards can begin with three practical actions:

• Add AI as a recurring agenda item within risk or strategy reviews.
• Define reporting expectations for management.
• Clarify committee ownership and escalation thresholds.

This structured approach ensures that AI remains visible without dominating board time. For a comprehensive governance framework that structures AI oversight across individual readiness, boardroom practices, organizational oversight, and stakeholder impact, see our guide to AI Governance for Boards, which introduces the AI Governance Compass.
The Governance Shift

AI is not an isolated initiative. It shapes decisions across the enterprise. When intelligence influences enterprise value, it must appear within formal governance structures. Integrating AI into the board agenda is not about technical fluency. It is about fiduciary discipline.

  • How Boards Should Oversee Third-Party AI Vendors: A Governance Guide for Directors

Artificial intelligence is increasingly embedded inside third-party software. From CRM platforms to HR systems to cybersecurity tools, many vendors now include AI features by default. In some cases, sensitive company data is being transmitted to external models without clear visibility at the board level. For directors, this creates a governance question: How should boards oversee third-party AI exposure? This is not an operational issue. It is a governance issue.

Why Third-Party AI Changes the Risk Profile

Historically, vendor oversight focused on data security, uptime reliability, and contractual terms. AI introduces additional dimensions of risk. Third-party AI systems may:

• Process sensitive internal data
• Influence customer-facing decisions
• Generate non-deterministic outputs
• Rely on external model providers
• Introduce explainability or bias concerns

In many cases, AI functionality is embedded into tools management teams already use. Boards may not receive explicit updates about these integrations unless oversight structures are intentional. Third-party AI exposure is governance exposure.

What Boards Should Expect from Management

Directors do not need to evaluate model architecture. But they should expect structured visibility into vendor AI usage. At a minimum, boards should be able to answer:

• Which vendors are using AI inside our technology stack?
• What company data is shared with those vendors?
• Are we relying on external large language models?
• What contractual protections exist?
• Who internally owns third-party AI risk?

Clarity matters more than technical detail. If management cannot articulate where AI is embedded in vendor systems, governance is incomplete.

Due Diligence for AI Vendors

Boards should confirm that vendor due diligence includes AI-specific considerations. This may include:

Data Handling and Retention: Does the vendor use company data to train models? Is data isolated? Is it stored outside the company’s jurisdiction?
Model Transparency and Explainability: Can the vendor explain how decisions are generated? In high-stakes use cases, opacity increases risk.

Bias and Compliance Risk: If AI systems influence hiring, pricing, credit, or eligibility decisions, are compliance reviews in place?

Incident Response and Escalation: What happens if an AI system generates harmful output? Is there a contractual obligation to notify the company?

Traditional vendor review processes often do not address these AI-specific questions. Boards should ensure they do now.

Committee Structure and Oversight Placement

There is no universal answer to where AI vendor oversight should sit. Some boards assign it to Audit or Risk Committees. Others expand the mandate of Technology or Strategy Committees. What matters is clarity and ownership. Oversight should not be informal. AI vendor exposure should be part of recurring reporting, not a one-time procurement discussion.

Third-Party AI and Regulatory Pressure

Regulators increasingly expect companies to understand how AI is used across their ecosystem, not just internally built systems. Large enterprise customers are also asking questions about AI usage, data handling, and explainability. Boards should assume that third-party AI usage may eventually require:

• Disclosure
• Documentation
• Demonstrable oversight

Waiting for regulatory enforcement is not a strategy. Structured oversight is.

A Practical Governance Approach

Boards can begin with three concrete steps:

• Request a mapping of AI-enabled vendors in use across the company.
• Clarify which executive owns third-party AI oversight.
• Integrate AI vendor exposure into recurring risk reviews.

This does not require technical fluency. It requires governance discipline. For a structured board-level methodology for AI oversight, see our guide to AI Governance for Boards, which introduces the AI Governance Compass framework used by private and growth-stage directors.
The Governance Shift

Technology procurement is no longer just about cost and performance. When AI is embedded into third-party tools, it becomes a decision-shaping capability operating inside the company’s ecosystem. Boards do not need to micromanage vendor selection. They do need to ensure:

• Visibility
• Accountability
• Reporting
• Escalation pathways

Third-party AI is not external risk. It is enterprise risk. And enterprise risk belongs under board oversight.

  • AI Landscape in 2026: What Leaders Still Don’t See Coming

Panelists Speaking on the AI Landscape in 2026

Last night WOMEN x AI hosted a conversation titled AI Landscape in 2026: What Matters and What Doesn’t. This was not a trends panel. It was a judgment panel. The room was full of founders, investors, operators, and executives trying to answer one question: What deserves our attention now, and what is noise? Here’s what we discussed:

1. AI Is Moving From Interface to Infrastructure

Many leaders still treat AI like a feature. A chatbot. A co-pilot. An enhancement layer. But the real shift underway is deeper. AI is becoming execution infrastructure. The organizations that will win in 2026 are not the ones with the flashiest demos. They are the ones embedding AI into core workflows, decision systems, reporting structures, and governance models. Intelligence is getting cheaper. Accountability is not. Accuracy is no longer a sufficient metric. Leaders need:

• Reproducibility
• Constraints
• Reversibility
• Auditability

Model updates do not fix broken pipelines.

2. Adoption Is a Cultural Decision, Not Just a Technical One

The delta between early and late adopters is widening quickly. Most organizations allocate 5–10% of time for AI upskilling and experimentation. The leaders on stage at this event argued that high-velocity organizations are closer to 50% structured exploration time. That requires something most companies still struggle with: psychological safety. At the end of the day, AI is not a purely technical challenge; it's also a human challenge. A change management challenge. AI adoption accelerates when:

• Usage is transparent, not hidden
• Performance drives adoption, not mandates
• Peer pressure outpaces executive memos

If employees are secretly using AI because leadership hasn’t formalized it, governance has already fallen behind behavior. If you need help overseeing AI in your organization, check out our AI Board Governance Compass.

3.
Responsible AI Is Operational, Not Philosophical

We heard this clearly from leaders building in regulated environments: guardrails cannot live at the final output layer. They must exist at every touchpoint. Guardrails should be added at these stages:

• Input validation
• Data storage
• Tool permissions
• Memory management
• Role-based access for persistent agents

Operational trust is becoming more strategic than model sophistication, especially in high-risk verticals like healthcare, where regulatory gray areas are accelerating innovation while safety incidents are increasing scrutiny around mental health bots, bias exposure, and liability risk. Markets are beginning to reward ethical, transparent AI builders.

4. The Shift From Retrieval to Action

Retrieval-augmented generation is plateauing. Agentic systems are rising. The shift is from information retrieval to execution. AI that drafts is table stakes. AI that acts is the frontier. But “vibe coding” is not enterprise readiness. Engineering principles still matter. Even for non-technical builders. Leaderboards and benchmark scores are becoming less relevant than infrastructure depth and governance maturity.

5. Enterprise Strategy Must Be Reframed

One of the most powerful moments in the discussion was this reframe: Stop asking, “How much headcount can we reduce with AI?” Start asking, “How can AI make our workforce more capable?” Expansion over contraction. Enablement over replacement. AI as performance enhancement, not workforce reduction. The companies that treat AI as augmentation infrastructure will outperform those that treat it as cost-cutting software.

6. Women in AI Leadership: Community as Advantage

We also discussed the gender dimension. Women remain underrepresented in AI-building rooms. Layoffs have disproportionately impacted female-dominated functions. Yet awareness is increasing. And here’s what matters: community accelerates capability. It changes who sees themselves as builders, not just adopters.
What 2026 Will Reward

2026 will not reward hype. It will reward:

• Infrastructure thinking
• Governance maturity
• Cultural readiness
• Operational trust
• Responsible execution

The question is no longer whether to adopt AI. The question is whether your organization is structurally prepared for it. And that is a leadership decision.

  • Upskill in AI: How to Create Your Personal AI Toolkit

    Prompt-generated image via ChatGPT By Moha Shah , Venture Capital Leader & Innovation Operator | AI, Mobility, Climate, & Fintech/Insurtech | Fortune 100 to Startups In the World Economic Forum’s 2025 Future of Jobs Report , 85% of employers surveyed plan to prioritize upskilling their workforce. New AI tools and AI-first startups are launching each week, and it can feel overwhelming. During the latest   Generative AI wave , tools such as ChatGPT, Claude, Perplexity, and Sora are sparking discussions across work and college campuses. The most common question that I hear from my network is: How do I begin learning about AI? In my experience working with business leaders, technical experts, and startup founders, I believe that anyone can learn to become AI-native. Everyone’s journey with AI tools will be personal. Learning about and using new Gen AI tools are skills that can be developed over time. It takes patience and a learner’s mindset to approach these tools with curiosity.  My AI Journey My passion for technology was shaped when I learned how to code during my formative years. I’ve witnessed how technology shifts have transformed the way we communicate, conduct business, and learn. During my professional career, I’ve led strategic initiatives at a global Fortune 100 financial services company focused on digital transformation and emerging technologies, including AI. I’ve also worked with early-stage startups at a Silicon Valley-based venture capital firm and leading accelerators, which kept me at the forefront of emerging trends and technologies. As a lifelong learner, I enjoy learning and sharing my perspectives with others. 
In previous WOMEN x AI blog posts, I’ve explored how AI is transforming different industries, such as: Education: “ How AI Is Reshaping Education ” Insurance: “ Insurance 3.0: The Future of Insurance Powered by AI ” Mobility: “ How AI Is Transforming the Future of Mobility ” However, the recent Gen AI wave prompted new questions for me: How could I stay current with emerging Gen AI tools and trends? How should I upskill as an experienced professional?  Create Your Personal AI Toolkit This post offers resources to help you build your personal AI toolkit. Each person’s learning journey will be different. You can tailor your toolkit with different Gen AI tools in the market based on your interests. For example, media enthusiasts might explore video generation tools such as Google’s   Veo  or OpenAI’s   Sora . Those interested in AI agents might experiment with   n8n  or   LangChain .  Try Prompting: Start With One Gen AI Model Photo by Solen Feyissa on Unsplash The AI ecosystem has its own nomenclature. You’ve likely heard terms like prompt, large language model, AI agent, and evals. For example, a prompt is what you input into Gen AI tools such as OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude to generate a response. As a starting point, I encourage you to try prompting with at least one Gen AI model. There are different AI prompts – zero-shot, role-based, contextual, among others – that you can learn about in MIT Sloan Teaching and Learning Technologies’   guide . As you experiment with different Gen AI models, it’s important to review each platform’s terms of service to understand how your prompts (work or personal) and data may be used to train their large language models.   Join a Community Photo by Moha Shah Join a community or group that works for you. Organizations such as   WOMEN x AI  and   Women Applying AI  attract women with diverse backgrounds who wish to lead, learn, and teach about AI within a community of peers. 
Also, meetup groups such as the GAI Insights’ GenAI Learning Lab bring together both learners and experts from the Gen AI community each week. If you’re a current college student, take advantage of campus resources like your career services office or student clubs focused on technology or entrepreneurship. For example, a student club called Sundai brings together current students across Harvard and MIT each Sunday to build AI prototypes. Whether you’re a student or an industry professional, it is helpful to find an organization that works well for you and supports your goals to learn about AI.

Take an Online Course

Photo by Nick Morrison on Unsplash

If you have a busy schedule, it can feel like a big commitment to take a course to upskill in AI. Fortunately, many online courses with reasonable time commitments enable you to learn about different AI topics, from prompting to AI agents. Many AI resources (free to paid) are available online based on your personal budget. If you’re curious about AI agents, Armand Ruiz offers a complimentary ten-day course; Armand served as the vice president of IBM’s AI platform. If you want to learn how to use Microsoft Copilot or OpenAI’s ChatGPT, GAI Insights currently offers free sessions each week. Also, communities such as WOMEN x AI and Women Applying AI host online sessions and in-person events to discuss different AI tools and trends.

Learn on YouTube

YouTube, launched in 2005, continues to be a great platform for creators to showcase their expertise. Here are several YouTube channels and videos that I’ve enjoyed as part of my own AI learning journey.

GAI Insights’ Daily AI News

Boston-based industry analyst firm GAI Insights hosts a 30-minute online show, Daily AI News, with GAI Insights’ industry experts analyzing the latest AI news, trends, and tools. The show airs weekdays at 7:30 a.m. ET and can be joined via YouTube or LinkedIn.
Online Tutorials on Replit  If you want to learn about using low-code to no-code AI platforms such as   Replit , there are videos such as “ Getting Started on Replit ” hosted by Matt Palmer, Head of Developer Relations at Replit. Tutorials and FAQs on Popular Gen AI Models You can learn how to use Gen AI tools released by companies such as   Anthropic ,   OpenAI , and   Google  via tutorials or FAQs on their websites. As with any online content, it may take some trial and error to find videos that align with your learning style and goals.   Follow AI Experts and Leaders on LinkedIn LinkedIn is an excellent platform to follow AI leaders and experts. I recommend following people across different industries and career stages to learn their perspectives about AI. I personally appreciate posts from tech CEOs, venture capitalists, academic experts, and early-stage startup founders building AI-first ventures. For example,   Daniela Amodei  (Anthropic President),   Paul Baier  (GAI Insights CEO),   Aaron Levie  (Box CEO),   Jenny Kay Pollock  (WOMEN x AI Co-Founder),   Ramesh Raskar  (MIT Professor and MIT Media Lab Director),   Elias Torres  (Agency Founder), and   Alison Wagonfeld  (Google Cloud CMO) are among LinkedIn users who offer perspectives on emerging AI tools and trends. Beyond individuals, you can also join public groups on LinkedIn such as   Artificial Intelligence  or   Artificial Intelligence Pivot . Subscribe to AI-Focused Newsletters and Podcasts To build AI literacy, newsletters are an excellent resource to stay ahead of AI tools, startups, and trends. Some newsletters that I recommend include   The Rundown AI ,   Axios AI+,  and   AI Secret . Many venture capital (VC) firms also publish newsletters focused on their latest investments in AI startups. Newsletters from   Madrona Ventures ,   Underscore VC , and   Andreessen Horowitz  (a16z) are among those that I like reading.    
Podcasts are another way to stay informed about AI news and trends; you can listen to them during a walk or your commute to work. Some podcasts that I enjoy include   WOMEN x AI Podcast ,   Me, Myself, and AI ,   NVIDIA AI  Podcast , and   Superhuman AI: Decoding the Future. Key Takeaway: Your AI Toolkit Will Evolve During your learning journey, new AI tools and services will be launched. Some may gain long-lasting traction while others will be short-lived. Embrace experimenting with new AI tools, and don’t be afraid to ask others in your network for guidance. Your AI toolkit will evolve, which is to be expected. As AI reshapes the future of work  and learning, it’s important to begin building your skills and personal AI toolkit.

  • How AI Agents Will Reshape the Future of Work in 2026

    Prompt-generated image via Freepik By Moha Shah , Venture Capital Leader & Innovation Operator | AI, Mobility, Climate, & Fintech/Insurtech | Fortune 100 to Startups The Boston Consulting Group forecasts that the market size for AI agents will reach   $52.1 billion  by 2030. This signals that AI agents will transform the future of work by autonomously completing workflows with orchestration across business functions and teams. While AI tools such as Claude, ChatGPT, and Gemini 3 Pro have dominated workplace adoption over the past two years, AI agents will enable organizations to operate with greater agility in 2026 by augmenting human decision-making and reducing manual workflows. What Are AI Agents? According to   Google Cloud , AI agents can “pursue goals and complete tasks on behalf of users. They show reasoning, planning, and memory and have a level of autonomy to make decisions, learn, and adapt.” Popular AI tools like ChatGPT or Claude require humans to prompt them to generate an output, but AI agents operate differently. They can complete a single task or multi-step workflows autonomously without human direction. For example, a customer service AI agent can answer a call, verify the caller’s identity, identify a billing error, process a refund, update the billing system, and transcribe a summary, all without human intervention.  How AI Agents Will Transform Work Photo by Mohamed Nohassi on Unsplash How exactly will AI agents reshape work? Today, AI agents are completing workflows autonomously from transcribing meeting notes to analyzing and compiling unstructured data for executive presentations. A popular notetaking application,   Otter AI , uses AI agents to transcribe notes and automatically send summaries to meeting participants. Visa is working to   enable AI agents  to shop and pay for purchases on their cardholders’ behalf. These examples illustrate AI agents’ expanding role across different industries. 
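To make the "autonomous multi-step workflow" idea concrete, here is a minimal sketch of the customer-service billing example in Python. Every function, field, and value here is hypothetical and invented for illustration; a real agent would reason with an LLM and call external identity, billing, and messaging systems rather than hard-coded steps.

```python
def handle_billing_call(account):
    """Hypothetical agent loop for the customer-service example:
    each step runs autonomously and is logged for auditability."""
    log = []

    def step(name, action):
        result = action()
        log.append((name, result))  # audit trail of every autonomous step
        return result

    step("verify_identity", lambda: account["pin_ok"])
    overcharge = step("detect_billing_error",
                      lambda: account["charged"] - account["expected"])
    if overcharge > 0:
        step("process_refund", lambda: overcharge)
        account["charged"] -= overcharge  # update the billing record
    step("send_summary",
         lambda: f"Refunded {overcharge}" if overcharge > 0 else "No error found")
    return log

# Hypothetical account with a $20 overcharge
account = {"pin_ok": True, "charged": 120, "expected": 100}
audit_log = handle_billing_call(account)
```

The design point is that the human never prompts between steps: the agent verifies, diagnoses, acts, and reports in one chained run, while the log preserves a record a human can review afterward.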
This post examines this transformation through insights from VCs, enterprise leaders, startup founders, and my experience working in financial services and with technology startups. What Do Venture Capitalists Think About AI Agents? Through my experience working at a Silicon Valley-based corporate venture capital firm, I have heard pitches from numerous startup founders who are solving unique problems using emerging technologies. In the current Gen AI era, leading VCs believe that AI agents will transform work across different industries by automating workflows, creating hybrid human-AI partnerships, and forcing legacy industries to modernize their IT infrastructure. Below are insights from top VCs.  Andreessen Horowitz (a16z)  | This Silicon Valley-based firm invests in early-stage software startups across different verticals, including AI, consumer, and healthcare. During a   December 2025 podcast , a16z Partner   Angela Strange  predicts that legacy industries like banking and insurance will “fix their plumbing” or outdated IT infrastructure to capture value from AI, improve the customer experience, and prevent revenue loss from siloed data management. Glasswing Ventures | This Boston-based VC firm invests in early-stage startups across AI and frontier technologies. The firm recently published its   Five-Stage Agent Autonomy Framework , which distills the AI agent revolution across three dimensions – impact, capability, and cognition. Madrona | This Seattle-based VC firm invests in startups across the AI tech stack. During the firm’s   2025 IA Summit,  panelists addressed AI agents’ impact on work. 
Madrona’s   blog post  stated that AI agents won’t replace humans; instead, hybrid human-AI teams will emerge to enable “organizational intelligence work that was previously impossible at scale.” Across these perspectives, AI agents will enable transformation – they will drive better customer experiences, foster hybrid collaboration with humans, and prevent revenue loss. However, success depends on enterprises investing in modern IT and data infrastructure. Angela Strange’s perspective on fixing IT systems in legacy industries resonates with my experience working at large insurance companies. Outdated IT infrastructure and disparate data sources hamper innovation – from new product development to integrations with AI-first startups.  How Are Enterprises Adopting AI Agents? Photo by Igor Omilaev  on Unsplash Enterprises are moving from popular AI tools like ChatGPT to AI agents that streamline workflows. According to   Moveworks’ 2025 survey  of 200 U.S. IT executives at companies generating $1B+ in annual revenue, 91% reported that non-technical employees were driving AI initiatives. Technical leaders are equally bullish on AI agent adoption. In a 2025   survey of 500 technical leaders  conducted by Anthropic and research firm Material, 81% plan to use AI agents to tackle more complex use cases in 2026. These findings signal a workplace transformation where AI agents will work alongside humans. Momentum around AI agent adoption is rising at leading companies. During   GAI World , a Gen AI conference held during Boston AI Week in September 2025, executives from Bain to Blue Cross Blue Shield of Michigan shared how they’re building and deploying AI agents to automate workflows across their organizations. During panels and keynotes – including insights from MIT Professor and MIT Media Lab Director   Ramesh Raskar  – executives and startup founders indicated that the adoption of AI agents will accelerate in 2026 and beyond. 
See my recap here: “ Top Takeaways from Boston AI Week 2025 .” How Do Startup Founders Think AI Agents Will Transform Work? Photo by Mimi Thian on Unsplash Startup founders are at the front lines of building innovative products and services, making their perspectives valuable for understanding how emerging technologies are being adopted. I recently connected with several founders working on agentic AI to learn how they believe AI agents will reshape the future of work. Here’s what they shared:  Adrienne Murga , Co-Founder of   eLLMo AI , reflected, “Org charts get thinner, capability layers get thicker: You will see fewer layers of coordinators and more centralized platforms that distribute agent capacity across the company. The differentiator will be how fast an org can deploy, govern, and improve workflows.” Georgia Liu , Founder of   Causena , said, “AI agents will reshape the future of work by making causal reasoning—once slow, expensive, and inaccessible—available at scale. As governments and institutions face rising complexity, the real value of AI will be in revealing why outcomes happen, not just summarizing what people say. This shift enables better policy, safer systems, and more accountable decisions.”   Nidhi Bali , Co-Founder of   Nalos , stated, “The rise of agentic AI will continue to commoditize intelligence and execution, shifting differentiation to taste, judgment, and the ability to choose the right problems to solve. The future of work belongs to polymaths who bridge domains to build products and systems that meaningfully impact humanity. As builders of this technology, our responsibility is to keep it human-centered, designed to augment people, not replace them.” Thomas Arul , Founder of   CloserLook AI , noted, “AI agents are set to transform the workplace by evolving from passive assistants into autonomous ‘digital teammates’ capable of executing complex, end-to-end business processes independently. 
This shift will redefine human roles toward ‘orchestration,’ where employees focus on strategic oversight, creative direction, and managing ecosystems of specialized agents rather than performing routine execution.” These founders envision a future where AI agents won’t simply displace human workers. Instead, they will handle execution and allow people to focus on judgment, creativity, strategy, and human-centered design. This shift will enable organizations to become more agile, with flatter hierarchies and faster decision-making. Key Takeaways   AI agents will shift human work from execution to orchestration in 2026. Acting as “digital teammates,” AI agents will execute multi-step workflows, while people will focus on judgment, creativity, and problem selection.    Adoption is accelerating across enterprises. Moveworks’ survey shows that 91% of IT executives say non-technical employees are driving AI initiatives, while Anthropic's joint survey with Material revealed that 81% of technical leaders plan to deploy AI agents for complex use cases in 2026. Enterprises must upgrade legacy IT infrastructure and data systems to successfully adopt AI agents and remain competitive. AI agents will augment workers rather than displace them. Startup founders building AI-first ventures emphasize that agents will manage execution, while people focus on higher-order work. Enterprises will become leaner and more agile as they adopt AI agents. Flatter organizations will enable quicker decision-making and more effective governance of AI-powered workflows.     Additional Resources to Learn About AI Agents Below are resources that showcase additional insights, trends, and research on AI agents.   Introductory Overview IBM | “ What Are AI Agents? ” – A basic primer on AI agents – how they are defined, how they work, and use cases.   
State of the Industry LangChain | “ State of AI Agents ” – Analysis from LangChain, a well-regarded open-source framework for developing apps with LLMs, covers insights on the current state of agentic AI and its use cases. Academic Innovation & Research MIT |   Project NANDA  – Launched in 2025 by MIT Professor and MIT Media Lab Director Ramesh Raskar, Project NANDA aims to build a more open, decentralized infrastructure for AI agents. WOMEN x AI Community Insights WOMEN x AI | “ Top 10 Takeaways from ‘Beyond the Hype: Agentic AI’ With WOMEN x AI ” – Insights from startup founders and product leaders working with AI agents during a virtual panel in August 2025. WOMEN x AI | “ The Future of Work in the New Era of AI ” – Perspectives from Boston-based startup founders building AI-first ventures and my own insights.

  • A Year of AI: Women x Innovation

What We Learned from 12 Months of Collaborating with AI

Last week, the WOMEN x AI community gathered at the Snowflake Silicon Valley AI Hub for A Year of AI: Women x Innovation, a reflection on what actually changed in AI in 2025, and what leadership will require in 2026. It was a Thursday night right before the holidays, and the room was packed. That alone said a lot. But what stayed with us most was not the tools. It was the tone of the conversation. Grounded. Curious. Human.

2025 Was the Year AI Settled In

We opened the night with a simple truth. 2025 was not the year AI arrived. It was the year AI settled in. Into our work. Into our classrooms. Into our boardrooms. Into our everyday decisions. This was the year curiosity turned into capability. When multimodal AI became normal. When building sped up. When AI governance stopped being theoretical. By the end of the year, the question was no longer “Should we use AI?” It became “How do we use AI well?” And more importantly, “Who gets to lead that conversation?”

Handing the Mic to Women in AI

At WOMEN x AI, our belief is that you build the future you want by deciding who gets the mic. In 2025, we handed 234 microphones to women, creating space not just to learn AI, but to lead with it. Across workshops, panels, walks, and community sessions, our members showed up as founders, operators, investors, technologists, and board leaders. By the end of the year, that community grew to over 700 global members, with nearly 3,000 attendees joining our events. This night felt like a culmination of that work.

The Tools That Changed Our Workflow

Photo Credit: Elena Pronina

The panel, moderated by Lori Adams Brown, focused on The Tools That Changed Our Workflow. But what made it powerful was not a list of products. It was discernment.
Founders and operators shared:

How they evaluated what was worth keeping versus what was hype
What actually changed the way they worked, not just how fast they worked
The moments when AI surprised them, failed them, and taught them something

Again and again, the conversation returned to the same theme: a human-centered approach matters. Speed without trust breaks systems. Scale without context creates risk. Tools do not replace leadership.

AI Governance Moves From Theory to Practice

One of the most resonant moments of the night was the AI Board Governance Compass mini-segment. In 2025, governance stopped being a future concern. It became operational. Boards are now expected to understand AI risk, usage, and accountability. Not in abstract terms, but in real decisions about data, privacy, IP, and responsibility.

Our AI Board Governance Compass was built to meet that moment: to give boards a practical framework for prioritizing AI topics, asking better questions, and guiding decisions with clarity. The response in the room made it clear. Leaders are hungry for tools that help them govern AI, not fear it.

Seeing AI Built by Our Community

Founder demos brought the conversation to life. We saw products in action: tools built by members of this community to solve real problems, with humans at the center. It was a reminder that innovation does not happen in isolation. It happens when builders are supported, seen, and trusted.

We saw Arianna Mauri, Founding CX Lead at Blok, demo the team's software for smarter product decisions, powered by simulation. Next was Ana Ramirez, Founder of Xalé, who showed us how her software creates dynamic profiles for job seekers and employers.

Photo Credits from Left to Right: Elena Pronina, Marisol Espinoza

Check Out the Event Slides

Looking Ahead to 2026

We closed the night with Rachel Wong and Tany Rios Castro leading a session on setting a 2026 AI resolution.

Photo Credit: Marisol Espinoza

Not a vague goal. A real intention. A tool to learn. A governance practice to adopt. A mindset to shift.

Our takeaway from the night was clear: 2026 is not about catching up to AI. It is about leading with AI. The edge will belong to those who do it with a human-centered approach.

Gratitude

This night would not have been possible without:

Snowflake and the Silicon Valley AI Hub for hosting and creating space for community
LatinaGeeks for being such wonderful partners and for co-creating community with us
Lori Adams-Brown, our panel moderator, for grounding the conversation in leadership and culture
Our panelists and founders for sharing real insights
And the WOMEN x AI community, for showing up with curiosity, generosity, and intention

As we head into 2026, we are more convinced than ever that the future of AI will not be led by those who move the fastest. It will be led by those who move thoughtfully, with a human-centered approach.
