HP AI Assistant — Case Study | Ana Zamfirache

Internal AI Assistant —
from 0 to 12,000 users.

HP's internal AI tool launched globally and stalled. I was brought in as Product Manager to diagnose why, rebuild the strategy, and drive adoption that actually stuck.

Role: Product Manager
Company: HP — Digital & Transformation Office
Timeline: 2024 – 2025
Type: Enterprise AI Product
Scope: Discovery · Strategy · Cross-functional Delivery
+40%
AI adoption growth in Q1 through structured rollout
2.8 → 4.1
User satisfaction score out of 5, pilot group
200
Employees in the structured pilot program
12K
Employees impacted by the platform at scale
01 — Context & Challenge

The tool was live.
Nobody was using it.

HP's Digital & Transformation Office rolled out an internal AI assistant to support thousands of employees globally — document access, content generation, task automation. Post-launch metrics told a different story: low adoption, inconsistent usage, and unclear satisfaction. The hardest enterprise challenge: getting people to actually use the tools the company paid for.

The Problem

Employees opened the tool once and left. The assistant had no awareness of who was using it, what they needed, or how to guide them to a useful outcome. Generic UI, generic responses — and word spreading through the org that it "doesn't work."

My Role

Brought in as Product Manager at the intersection of user research, AI capabilities, and business outcomes. My mandate: diagnose the failure, build a strategy, and get this product to a place where people actually wanted to use it every day.

Strategic Objectives
  • Increase adoption and sustained engagement across departments
  • Align assistant capabilities to real Jobs-To-Be-Done — not marketing promises
  • Define a product roadmap with short-, mid-, and long-term horizons
  • Drive alignment across Engineering, Legal, Security, and Business Leadership
02 — Strategic Approach

Discovery first.
Delivery second.

Before touching the roadmap, I needed to understand why adoption had stalled. That meant starting with real users — not assumptions, not stakeholder opinions about what employees wanted.

01

Problem Framing & Discovery

1:1 interviews with employees across Sales, Marketing, HR, and R&D. Mapped all assistant touchpoints and friction areas using a service blueprint. Facilitated stakeholder workshops to align on constraints. Identified the critical Jobs-To-Be-Done the assistant could meaningfully solve — and the ones it couldn't.

User Interviews · Service Blueprint · JTBD Framework · Stakeholder Workshops
02

Experience Audit & Benchmarking

Audited current flows and the full prompt experience — onboarding, query UX, output clarity, and feedback loops. Benchmarked against Microsoft Copilot and Notion AI to identify capability and UX gaps. Defined usability KPIs: task completion time, satisfaction score, repeat usage rate (a computation sketch follows this step).

UX Audit · Competitive Benchmarking · KPI Definition
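A minimal sketch of how those three KPIs could be computed from raw usage logs. The event shape and field names are illustrative assumptions, not HP's actual telemetry schema:

```typescript
// Illustrative only: event shape and field names are assumptions,
// not HP's actual telemetry schema.
interface UsageEvent {
  userId: string;
  timestamp: number;        // epoch ms
  taskCompleted: boolean;
  taskDurationMs?: number;  // present when taskCompleted is true
  satisfaction?: number;    // 1–5, from the in-product survey
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// KPI 1: median task completion time across completed tasks.
function taskCompletionTimeMs(events: UsageEvent[]): number {
  const durations = events
    .filter((e) => e.taskCompleted && e.taskDurationMs !== undefined)
    .map((e) => e.taskDurationMs as number);
  return durations.length ? median(durations) : NaN;
}

// KPI 2: mean satisfaction score across survey responses.
function satisfactionScore(events: UsageEvent[]): number {
  const scores = events.flatMap((e) =>
    e.satisfaction !== undefined ? [e.satisfaction] : []
  );
  return scores.length ? scores.reduce((a, b) => a + b, 0) / scores.length : NaN;
}

// KPI 3: repeat usage rate: share of users active in two or more distinct weeks.
function repeatUsageRate(events: UsageEvent[]): number {
  const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
  const weeksByUser = new Map<string, Set<number>>();
  for (const e of events) {
    const weeks = weeksByUser.get(e.userId) ?? new Set<number>();
    weeks.add(Math.floor(e.timestamp / WEEK_MS));
    weeksByUser.set(e.userId, weeks);
  }
  const users = [...weeksByUser.values()];
  return users.length ? users.filter((w) => w.size >= 2).length / users.length : NaN;
}
```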
03

Product Strategy & Roadmap

Built a product roadmap across three horizons. Prioritized use cases by value and technical feasibility (a scoring sketch follows this step). Proposed a modular AI architecture enabling future prompt packs, department personas, and scalable rollout — without rebuilding the core product each time.

Product Roadmap · Prioritization Framework · Modular Architecture
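One way to make the value-vs-feasibility prioritization repeatable is a weighted score per use case. The scales, weights, and example use cases in this sketch are illustrative assumptions, not the actual scoring model:

```typescript
// Illustrative value-vs-feasibility scoring; scales, weights, and
// the example use cases are assumptions for illustration.
interface UseCase {
  name: string;
  businessValue: number;        // 1–5: expected impact on adoption and productivity
  technicalFeasibility: number; // 1–5: effort, risk, platform dependencies
}

// Simple weighted score; higher ranks earlier on the roadmap.
function priorityScore(u: UseCase, valueWeight = 0.6): number {
  return valueWeight * u.businessValue + (1 - valueWeight) * u.technicalFeasibility;
}

const candidates: UseCase[] = [
  { name: "Sales proposal drafting", businessValue: 5, technicalFeasibility: 4 },
  { name: "HR policy Q&A", businessValue: 4, technicalFeasibility: 5 },
  { name: "R&D literature summarization", businessValue: 4, technicalFeasibility: 2 },
];

const roadmapOrder = [...candidates].sort(
  (a, b) => priorityScore(b) - priorityScore(a)
);
console.log(roadmapOrder.map((u) => u.name));
```

Encoding the score turns roadmap debates into debates about inputs rather than about rank order.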
04

Cross-Functional Delivery

Led sprint planning with Engineering and Legal to navigate AI safety, data access controls, and security requirements. Wrote product specs and user stories grounded in prompt engineering logic. Introduced feedback loops via in-product surveys and usage analytics from day one (an example event shape follows this step).

Sprint Planning · Product Specs · Prompt Engineering · Analytics & Feedback
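A sketch of the kind of in-product feedback event those day-one loops could capture. The field names and endpoint are hypothetical, not the production implementation:

```typescript
// Hypothetical in-product feedback event; field names and the
// endpoint are assumptions, not the actual HP implementation.
interface FeedbackEvent {
  userId: string;
  persona: "sales" | "marketing" | "hr" | "rnd";
  promptId: string;          // which template or query produced the output
  rating: 1 | 2 | 3 | 4 | 5;
  comment?: string;
  timestamp: string;         // ISO 8601
}

async function submitFeedback(event: FeedbackEvent): Promise<void> {
  // Fire-and-forget POST to an internal analytics collector (assumed URL).
  await fetch("/api/assistant/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}
```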
05

Pilot, Learn, Iterate

Deployed to 200 employees in a structured pilot. Captured qualitative and quantitative data simultaneously. Iterated on smart prompt suggestions, department persona modes, and guided onboarding flows — then prepared for full-scale rollout to 12,000+ employees.

Pilot Rollout · Mixed-Methods Research · Feature Iteration · Change Management
03 — User Journey Transformation

From one-time experiment
to daily workflow.

The before/after wasn't just UX. It was how employees experienced the tool from first contact through sustained use — and whether they recommended it to anyone else.

Before — Low adoption, high abandonment
Discovery

Generic email announcement, no context. Unclear value proposition. Employees felt skeptical and didn't know what to do with the tool.

First Interaction

Generic interface, no guidance on what to ask. Responses felt irrelevant to their actual work. Frustration in the first session.

Attempted Usage

Multiple failed queries, no help system, output format unclear. Impatience set in. No compelling reason to return.

Abandonment

Back to familiar tools. Word spread: "it doesn't really work." Low satisfaction scores. Tool invisible in daily workflows.

After — Engaged, embedded, advocating
Guided Discovery

Targeted department announcement with relevant use cases. Interactive onboarding from day one. Employees arrived curious, not confused.

Contextual First Use

Department persona selected upfront. Templates and prompts surfaced immediately. Output felt directly relevant and usable.

Progressive Adoption

Smart suggestions guided exploration. Users saved patterns, gave feedback, built confidence over multiple sessions.

Integration & Advocacy

Tool embedded in daily workflow. Power users sharing with colleagues, submitting feature ideas. NPS improved measurably.

04 — Modular AI Framework

Architecture for scale,
not a one-off fix.

The core insight: patchwork improvements wouldn't survive at scale. I designed a four-layer modular framework that could extend across departments without rebuilding from scratch each time a new team onboarded. A simplified code sketch of how the layers compose follows the layer map.

Layer 01: User Interface
Persona Switcher
Role-based Onboarding
Contextual Help
Progressive Disclosure
Layer 02: Prompt Intelligence
Template Library
Context Injection
Query Enhancement
Output Formatting
Layer 03: Business Logic
Workflow Integration
Role-based Access Control
Department Business Rules
Approval Workflows
Layer 04: Learning & Analytics
Usage Analytics
In-product Feedback Loops
A/B Testing
Performance Monitoring
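A compressed sketch of how the four layers could compose, assuming a persona-keyed template library, a pluggable policy engine, and an injected model client. Every name here is illustrative, not the production design:

```typescript
// Illustrative composition of the four layers; all interface and
// function names are assumptions, not the production design.

// Layer 01 (User Interface): persona selection drives everything downstream.
type Persona = "sales" | "marketing" | "hr" | "rnd";

// Layer 02 (Prompt Intelligence): templates plus context injection.
interface PromptTemplate {
  id: string;
  personas: Persona[];
  build(userQuery: string, context: Record<string, string>): string;
}

// Layer 03 (Business Logic): access control and department rules.
interface PolicyEngine {
  canAccess(persona: Persona, templateId: string): boolean;
}

// Layer 04 (Learning & Analytics): every interaction is observable.
interface Analytics {
  track(event: { persona: Persona; templateId: string; latencyMs: number }): void;
}

class AssistantCore {
  constructor(
    private templates: PromptTemplate[],
    private policy: PolicyEngine,
    private analytics: Analytics,
    private llm: (prompt: string) => Promise<string>,
  ) {}

  async ask(
    persona: Persona,
    query: string,
    context: Record<string, string>,
  ): Promise<string> {
    // Pick the first template this persona is both mapped to and permitted to use.
    const template = this.templates.find(
      (t) => t.personas.includes(persona) && this.policy.canAccess(persona, t.id),
    );
    if (!template) throw new Error("No permitted template for this persona");
    const started = Date.now();
    const answer = await this.llm(template.build(query, context));
    this.analytics.track({
      persona,
      templateId: template.id,
      latencyMs: Date.now() - started,
    });
    return answer;
  }
}
```

The property the sketch encodes: onboarding a new department means adding templates and rules (data), while the assistant core stays untouched.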
05 — Stakeholder Alignment

Five groups. Five agendas.
One product.

Shipping an internal AI platform at HP meant navigating five stakeholder groups with legitimate but competing priorities — and maintaining product momentum without losing anyone along the way.

Product
My Remit
  • User adoption & satisfaction
  • Product-market fit
  • Scalable framework design
  • Cross-functional alignment
Engineering
Technical Partners
  • Technical feasibility
  • Performance & scalability
  • Development timelines
  • System architecture
Legal & Compliance
Risk & Governance
  • Data privacy regulations
  • AI liability & transparency
  • Content generation risks
  • Vendor agreements
Business Leadership
Executive Sponsors
  • ROI & productivity gains
  • Change management
  • Budget constraints
  • Employee satisfaction
Security & IT
Infrastructure
  • Data security & access
  • Infrastructure requirements
  • Integration complexity
  • Audit & monitoring
End Users
Center of Gravity
  • Ease of use & clarity
  • Relevant, useful output
  • Trust in AI results
  • Integration into workflow
06 — What I Took Away

What actually matters
in enterprise AI.

The technical work was never the hardest part. These are the product lessons that transferred directly into how I build and consult today.

AI adoption is a change management problem as much as a product problem. You can build a great tool and still fail if you treat human resistance as an afterthought instead of a variable in the roadmap.

Internal tools need persona-driven design, not generic functionality. A Sales user and an R&D researcher have almost nothing in common in their daily work — building one interface for both is choosing to fail.

Cross-functional alignment is a core PM skill, not a soft skill. Navigating Legal, Security, Engineering, and Business simultaneously — without losing product momentum — is exactly where most internal AI projects die.

JTBD is especially powerful for AI products, where users often don't know what they want to ask for. Starting from the job, not the feature, surfaced use cases I wouldn't have found through any other method.

Modular architecture isn't just an engineering concern — it's a product strategy. Building for department extensibility from day one meant the framework survived the pilot and scaled.

07 — Impact

The numbers moved.
So did the culture.

Quantifiable improvements in adoption and satisfaction — and a framework recognized as best practice for AI initiatives across HP's Digital & Transformation Office.

+40%
AI adoption growth in Q1
2.8 → 4.1
User satisfaction / 5
200
Pilot users, validated outcomes
12K
Employees impacted at scale

Beyond the metrics: the modular AI assistant framework I defined became a recognized best practice across HP's Digital & Transformation Office — applied to subsequent AI initiatives, not rebuilt from scratch each time.

AI tools gathering dust
in your company?

I diagnose exactly where adoption breaks — and I deliver the fix, with results guaranteed in the contract. 6–10 weeks. Not slides.

Book the Audit Call →
Previous: Workforce Planning Redefined
Next: Improving Employee Productivity Through UX Strategy & Design Thinking