Alyssa is a board director, investor and advisor with deep expertise in technology, governance, responsible innovation and technology transactions. She serves on the boards of AppLovin, Make-A-Wish Connecticut, the Georgetown Law Board of Visitors and the Quello Center (Michigan State) Advisory Board. She previously held executive and legal leadership roles at companies including HubSpot, Sidewalk Labs (a former Alphabet company), Harman and Netflix, and has experience developing responsible AI and responsible data governance programs for high-growth companies. Alyssa also advises scaling companies and mission-driven organizations, with a focus on governance, AI, privacy, cybersecurity, ethical tech and scaling smartly. She brings a pragmatic approach to navigating complex issues and driving long-term value.
In an era when nearly every headline and investor deck mentions artificial intelligence, too many leaders are still asking the wrong questions. At our recent Masterclass, board director and responsible innovation expert Alyssa Harvey Dawson challenged members to shift from AI hype to AI readiness — offering a practical, rigorous framework any executive can apply.
Below, we break down five key lessons for leaders who want to deploy AI strategically and responsibly.
Alyssa’s opening point was unambiguous: “The wrong question is, ‘How do we use AI?’ The right question is, ‘How do we strengthen our business — and can AI do that safely?’”
Many organizations fall into the trap of chasing AI for its own sake, often driven by boardroom pressure or trend anxiety. But without a clear business problem, AI is just a solution in search of a use case. Alyssa urged leaders to move from reactive adoption to strategic alignment. Technology, she reminded the group, is only valuable when it measurably advances the business.
To ground the conversation, Alyssa offered a simple three-part test for any leadership team considering an AI initiative:
1. Is your data clean, organized and accessible across departments?
2. Can you articulate a clear, defined problem that AI is meant to solve, with measurable success metrics?
3. Is your leadership team aligned on risk tolerance, governance and outcomes?
If any one pillar is weak, the project is at risk of wasted investment, misalignment, or reputational damage. Alyssa’s advice: be honest about gaps. Do the foundational work first; the cost of skipping it is far higher later.
Alyssa introduced her REAL Framework, designed to keep AI decisions anchored to what matters:
R: Results-focused — Start with the business outcome, not the tool.
E: Evidence-based — Ground decisions in data, not assumptions.
A: Alignment & Accountability — Involve stakeholders across product, legal, security, and compliance from day one.
L: Long-Term Sustainability — Design for agility, iteration, and trust.
This approach prevents common pitfalls: wasted pilots, regulatory blind spots, and misused customer data. For sensitive industries, she pointed to GDPR, CCPA, and the EU AI Act as guardrails that make early governance non-negotiable.
When asked how to test new AI tools responsibly, Alyssa was clear: “Test and iterate, but don’t boil the ocean. Start with non-sensitive data. Use synthetic data when possible to protect customer privacy.”
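To make the synthetic-data suggestion concrete, here is a minimal sketch (ours, not from the Masterclass) of generating fake customer records in Python with the open-source Faker library, so an AI pilot can be exercised without ever touching production data. The function name synthetic_customers and the specific fields are illustrative assumptions:

```python
# Minimal sketch: generate synthetic customer records for an AI pilot,
# so no real customer data is exposed during early testing.
# Assumes the open-source Faker library (pip install faker).
from faker import Faker

fake = Faker()
Faker.seed(42)  # seeded for reproducible test data

def synthetic_customers(n: int) -> list[dict]:
    """Return n fake customer records with realistic-looking fields."""
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "company": fake.company(),
            "signup_date": fake.date_this_decade().isoformat(),
        }
        for _ in range(n)
    ]

# Feed the synthetic records to the AI tool under evaluation
# instead of production data.
for record in synthetic_customers(3):
    print(record)
```

The design point is simply that the pilot sees data with the same shape as the real thing, while nothing sensitive leaves the building.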
She also urged leaders to press vendors on their security and compliance standards. If an AI vendor can’t answer basic questions about how they store and use your data, look elsewhere. For larger firms, building custom wrappers or proprietary layers around AI models can safeguard IP and trade secrets.
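As one illustration of what such a wrapper layer might do (a hypothetical sketch, not Alyssa’s implementation), the code below scrubs obvious personal identifiers from a prompt and logs it before anything reaches an external model. The send_to_model stub stands in for whatever vendor API a company actually uses, and the redaction rules are deliberately simplistic:

```python
# Hypothetical sketch of a thin wrapper around an external AI model:
# prompts are scrubbed of obvious identifiers before leaving the company,
# and every outbound call is logged for audit.
import re
import logging

logging.basicConfig(level=logging.INFO)

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace emails and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def send_to_model(prompt: str) -> str:
    # Placeholder for the real vendor API call.
    return f"(model response to: {prompt})"

def guarded_completion(prompt: str) -> str:
    """Redact, log for audit, then forward the prompt to the model."""
    safe_prompt = redact(prompt)
    logging.info("outbound prompt: %s", safe_prompt)
    return send_to_model(safe_prompt)

print(guarded_completion("Summarize notes for jane@acme.com, 555-867-5309."))
```

A production version would handle far more identifier types and policy rules, but the architecture is the point: one controlled chokepoint between your data and the vendor.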
AI is moving too fast to treat governance as an afterthought. Alyssa’s own experience designing responsible data and AI policies at companies like Netflix and HubSpot shows that embedding guardrails early accelerates innovation later.
“Bring in legal, risk, compliance, product, and engineering upfront,” she said. “When these teams build together, you design for speed and trust.”
In a marketplace where a single AI misstep can erode customer loyalty overnight, clear guardrails are not bureaucracy but brand protection.