Internal AI Assistant —
from 0 to 12,000 users.
HP's internal AI tool launched globally and stalled. I was brought in as Product Manager to diagnose why, rebuild the strategy, and drive adoption that actually stuck.
The tool was live.
Nobody was using it.
HP's Digital & Transformation Office rolled out an internal AI assistant to support thousands of employees globally — document access, content generation, task automation. Post-launch metrics told a different story: low adoption, inconsistent usage, and unclear satisfaction. The hardest enterprise challenge: getting people to actually use the tools the company paid for.
Employees opened the tool once and left. The assistant had no awareness of who was using it, what they needed, or how to guide them to a useful outcome. Generic UI, generic responses — and word spreading through the org that it "doesn't work."
Brought in as Product Manager at the intersection of user research, AI capabilities, and business outcomes. My mandate: diagnose the failure, build a strategy, and get this product to a place where people actually wanted to use it every day.
- Increase adoption and sustained engagement across departments
- Align assistant capabilities to real Jobs-To-Be-Done — not marketing promises
- Define a product roadmap with short-, mid-, and long-term horizons
- Drive alignment across Engineering, Legal, Security, and Business Leadership
Discovery first.
Delivery second.
Before touching the roadmap, I needed to understand why adoption had stalled. That meant starting with real users — not assumptions, not stakeholder opinions about what employees wanted.
Problem Framing & Discovery
1:1 interviews with employees across Sales, Marketing, HR, and R&D. Mapped all assistant touchpoints and friction areas using a service blueprint. Facilitated stakeholder workshops to align on constraints. Identified the critical Jobs-To-Be-Done the assistant could meaningfully solve — and the ones it couldn't.
Experience Audit & Benchmarking
Audited current flows and the full prompt experience — onboarding, query UX, output clarity, and feedback loops. Benchmarked against Microsoft Copilot and Notion AI to identify capability and UX gaps. Defined usability KPIs: task completion time, satisfaction score, repeat usage rate.
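The repeat usage rate KPI above can be sketched in code. This is a minimal illustrative sketch, not HP's actual analytics pipeline; the event shape and field names (`user_id`, `day`) are hypothetical.

```python
from datetime import date

def repeat_usage_rate(events: list[dict]) -> float:
    """Share of users who had sessions on 2+ distinct days.

    `events` is a list of usage-log entries; the `user_id` and `day`
    fields are hypothetical names for illustration.
    """
    days_by_user: dict[str, set[date]] = {}
    for e in events:
        days_by_user.setdefault(e["user_id"], set()).add(e["day"])
    if not days_by_user:
        return 0.0
    repeaters = sum(1 for days in days_by_user.values() if len(days) >= 2)
    return repeaters / len(days_by_user)

events = [
    {"user_id": "a", "day": date(2024, 1, 1)},
    {"user_id": "a", "day": date(2024, 1, 3)},  # "a" returned -> repeater
    {"user_id": "b", "day": date(2024, 1, 1)},  # "b" tried once and left
]
print(repeat_usage_rate(events))  # 0.5
```

Counting distinct days (rather than raw sessions) is the point: it separates users who genuinely came back from users who fired off several queries in one frustrated sitting.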
Product Strategy & Roadmap
Built a product roadmap across three horizons. Prioritized use cases by value and technical feasibility. Proposed a modular AI architecture enabling future prompt packs, department personas, and scalable rollout — without rebuilding the core product each time.
Cross-Functional Delivery
Led sprint planning with Engineering and Legal to navigate AI safety, data access controls, and security requirements. Wrote product specs and user stories grounded in prompt engineering logic. Introduced feedback loops via in-product surveys and usage analytics from day one.
Pilot, Learn, Iterate
Deployed to 200 employees in a structured pilot. Captured qualitative and quantitative data simultaneously. Iterated on smart prompt suggestions, department persona modes, and guided onboarding flows — then prepared for full-scale rollout to 12,000+ employees.
From one-time experiment
to daily workflow.
The before/after wasn't just UX. It was how employees experienced the tool from first contact through sustained use — and whether they recommended it to anyone else.
Before

Discovery
Generic email announcement, no context. Unclear value proposition. Employees felt skeptical and didn't know what to do with the tool.
First Interaction
Generic interface, no guidance on what to ask. Responses felt irrelevant to their actual work. Frustration in the first session.
Attempted Usage
Multiple failed queries, no help system, output format unclear. Impatience set in. No compelling reason to return.
Abandonment
Back to familiar tools. Word spread: "it doesn't really work." Low satisfaction scores. Tool invisible in daily workflows.

After

Guided Discovery
Targeted department announcement with relevant use cases. Interactive onboarding from day one. Employees arrived curious, not confused.
Contextual First Use
Department persona selected upfront. Templates and prompts surfaced immediately. Output felt directly relevant and usable.
Progressive Adoption
Smart suggestions guided exploration. Users saved patterns, gave feedback, built confidence over multiple sessions.
Integration & Advocacy
Tool embedded in daily workflow. Power users sharing with colleagues, submitting feature ideas. NPS improved measurably.
Architecture for scale,
not a one-off fix.
The core insight: patchwork improvements wouldn't survive at scale. I designed a four-layer modular framework that could extend across departments without rebuilding from scratch each time a new team onboarded.
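The modular idea can be sketched as code. Everything below is a hypothetical illustration of the pattern, not HP's actual implementation: a stable core that new department "personas" and "prompt packs" plug into, so onboarding a team means registering a module, not rebuilding the product.

```python
from dataclasses import dataclass, field

@dataclass
class PromptPack:
    """A bundle of task-specific prompt templates (names are illustrative)."""
    name: str
    templates: dict[str, str]  # task -> prompt template

@dataclass
class Persona:
    """Department-level configuration layered on top of the core assistant."""
    department: str
    tone: str
    packs: list[PromptPack] = field(default_factory=list)

@dataclass
class Assistant:
    personas: dict[str, Persona] = field(default_factory=dict)

    def register(self, persona: Persona) -> None:
        # Onboarding a new department = plugging in a module;
        # the core assistant stays untouched.
        self.personas[persona.department] = persona

    def templates_for(self, department: str) -> list[str]:
        persona = self.personas.get(department)
        return [t for p in (persona.packs if persona else []) for t in p.templates]

assistant = Assistant()
assistant.register(Persona("Sales", "concise", [
    PromptPack("outreach", {"follow_up": "Draft a follow-up to {account}..."}),
]))
print(assistant.templates_for("Sales"))  # ['follow_up']
```

The design choice that matters: the core never branches on department names. Each team ships as configuration, which is what lets the framework scale from one pilot group to the whole org.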
Five groups. Five agendas.
One product.
Shipping an internal AI platform at HP meant navigating five stakeholder groups with legitimate but competing priorities — and maintaining product momentum without losing anyone along the way.
Product (my mandate)
- User adoption & satisfaction
- Product-market fit
- Scalable framework design
- Cross-functional alignment

Engineering
- Technical feasibility
- Performance & scalability
- Development timelines
- System architecture

Legal
- Data privacy regulations
- AI liability & transparency
- Content generation risks
- Vendor agreements

Business Leadership
- ROI & productivity gains
- Change management
- Budget constraints
- Employee satisfaction

IT & Security
- Data security & access
- Infrastructure requirements
- Integration complexity
- Audit & monitoring

Employees
- Ease of use & clarity
- Relevant, useful output
- Trust in AI results
- Integration into workflow
What actually matters
in enterprise AI.
The technical work was never the hardest part. These are the product lessons that transferred directly into how I build and consult today.
AI adoption is a change management problem as much as a product problem. You can build a great tool and still fail if you treat human resistance as an afterthought instead of a variable in the roadmap.
Internal tools need persona-driven design, not generic functionality. A Sales user and an R&D researcher have nothing in common — building one interface for both is a deliberate product failure.
Cross-functional alignment is a core PM skill, not a soft skill. Navigating Legal, Security, Engineering, and Business simultaneously — without losing product momentum — is exactly where most internal AI projects die.
JTBD is especially powerful for AI products, where users often don't know what they want to ask for. Starting from the job, not the feature, surfaced use cases I wouldn't have found through any other method.
Modular architecture isn't just an engineering concern — it's a product strategy. Building for department extensibility from day one meant the framework survived the pilot and scaled.
The numbers moved.
So did the culture.
Quantifiable improvements in adoption and satisfaction, plus a framework that outlived the project itself.
Beyond the metrics: the modular AI assistant framework I defined became a recognized best practice across HP's Digital & Transformation Office — applied to subsequent AI initiatives, not rebuilt from scratch each time.
AI tools gathering dust
in your company?
I diagnose exactly where adoption breaks and deliver the fix, with contractually guaranteed results. 6–10 weeks. Not slides.