Permission Governance Copilot

[Image: Lattice of lines and dots in a geometric structure.]
DESIGNING TRUST

Creating a conversational AI that interprets policy, explains risk, and builds confidence through clarity.

01

Transforming compliance into confidence

PURPOSE
Modern enterprise app publishing requires strict oversight of which APIs and data scopes each app can access. The existing permission review process was entirely manual: fragmented across spreadsheets, forms, and subject-matter expertise. Our goal was to redesign and partially automate this governance workflow through a conversational AI assistant, reducing review time while enforcing least-privilege access.

MY ROLE
As UX Strategy & Systems Design Lead, I defined the Copilot’s information architecture, conversation logic, and probabilistic decision framework. Partnering with Technical Architects, Security and Privacy SMEs, and Engagement Managers, I transformed scattered tribal knowledge into a structured, auditable guidance system.

TEAM
1 designer (myself)
3 Copilot developers
2 Technical Architect SMEs (~2 h/week)
Rotating Security and Privacy reviewers
4-week agile sprints (Phase 2 maturity)
 
Timeline: 12-week proof-of-concept phase.

OUTCOME
A production-ready prototype that guides developers through permission selection, flags potential compliance risks, and recommends safer alternatives, cutting manual review loops and improving throughput consistency.

  • ROLE: UX Lead

    CLIENT: Global Fortune 50 Technology Company

    DATE: 2025

02

Challenge

[Image: Lattice-shaped abstract mountain of lines and connecting dots.]
Context
  • Permission governance was a known friction point in the internal app-publishing pipeline. Reviewers struggled to keep pace with hundreds of requests and evolving API scopes. Developers lacked clear criteria for what constituted “safe” access, leading to back-and-forth cycles and inconsistent decisions.

Core challenges
  • /01 Limited SME bandwidth: experts available only a few hours weekly.

    /02 Inconsistent documentation: fragmented across wikis and chats.

    /03 Early AI unreliability: hallucinations and misclassifications in prior prototypes.

    /04 Need for tiered risk clarity: balancing flexibility with data-protection rigor.

Success metrics
  • Approval cycle: under 10 business days, down from 20–30 days

Re-review rate: reduced from 32% to 10%

Critical-risk permission selection: reduced from 35% to 10%

Developer self-service: higher self-service accuracy at every step

03

Process

Phase I
Knowledge Architecture
  • SME workshops revealed the full set of workflow dependencies and failure points across the permission-evaluation process. From these sessions, I structured the domain into nested categories: Permission Risk Tiers, Role-Based Access Models, and Content Collections, so the Copilot could reason over a consistent hierarchy.

    I authored a scalable “General Instructions” prompt that defined how the agent should interpret confidence thresholds, trigger escalations, and provide contextual follow-up guidance.
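
To make this concrete, here is a minimal sketch of how the nested categories and the General Instructions thresholds might be encoded; every type name and threshold below is illustrative, not the production schema.

```typescript
// Hypothetical encoding of the knowledge hierarchy described above.
// Names, fields, and thresholds are assumptions for illustration.

type RiskTier = "Allowed" | "Restricted";

interface PermissionEntry {
  scope: string;      // an API scope identifier
  tier: RiskTier;     // Permission Risk Tier
  roleModel: string;  // pointer into the Role-Based Access Models
  collection: string; // Content Collection the entry belongs to
}

interface GeneralInstructions {
  autoAnswerThreshold: number; // above this, answer directly with rationale
  escalationThreshold: number; // below this, hand off to a human reviewer
  followUps: string[];         // contextual follow-up guidance
}

const instructions: GeneralInstructions = {
  autoAnswerThreshold: 0.9,
  escalationThreshold: 0.7,
  followUps: ["Which data does your app read?", "Is access user-scoped?"],
};
```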

Phase II
Risk-tier Framework
  • We introduced a two-tier classification model separating Allowed (low-risk, pre-approved) permissions from Restricted ones requiring human review. Evaluation criteria were anchored in data sensitivity, tenant impact, and delegation scope, ensuring decisions aligned to established governance policies. To build user trust, the system included concise “why restricted” explanations that clarified the rationale behind each classification.
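
A minimal sketch of that two-tier decision, assuming the three criteria named above; the specific rules are hypothetical stand-ins for the governance policy, not the shipped logic.

```typescript
// Sketch of the Allowed/Restricted split. Rules are illustrative only.

interface PermissionRequest {
  scope: string;
  dataSensitivity: "low" | "moderate" | "high";
  tenantWide: boolean; // tenant impact: affects all users vs. one user
  delegated: boolean;  // delegation scope: user-context vs. app-only access
}

interface Classification {
  tier: "Allowed" | "Restricted";
  whyRestricted?: string; // plain-language rationale surfaced to the user
}

function classify(req: PermissionRequest): Classification {
  if (req.dataSensitivity === "high") {
    return {
      tier: "Restricted",
      whyRestricted: `${req.scope} touches high-sensitivity data and requires human review.`,
    };
  }
  if (req.tenantWide && !req.delegated) {
    return {
      tier: "Restricted",
      whyRestricted: `${req.scope} grants app-only, tenant-wide access, which exceeds least privilege.`,
    };
  }
  return { tier: "Allowed" }; // low-risk, pre-approved
}
```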

Simplified User Flow
[Image: Simple flow detailing the logic of the Governance Copilot.]
Iterating into reality
  • Visualizing: Created system flow diagrams outlining data-state transitions and model-confidence handoffs between components.

Prototyping: Developed low-fidelity prototypes in Figma and Miro to visualize the conversational flow and its evolution.

Refinement: Partnered with developers to simulate AI responses, latency, and real-time updates.

Testing: Tested early versions with facilitators to measure comprehension, latency tolerance, and feature discoverability.

Phase III
Prototyping
  • The prototype implemented an adaptive conversation flow (clarify → classify → recommend) that mirrored the steps SMEs take during real reviews.

    Probabilistic reasoning was made legible through confidence statements such as “likely safe” or “requires review,” helping users understand reliability at each step.

    We tested the system against anonymized historical requests, refined the logic to reduce unsupported queries, and presented the working prototype to Security, Privacy, and Architecture leads. Their feedback on tone, accuracy, and traceability informed the next-phase roadmap, including App ID lookup integration, threat-model validation, and expansion into additional governance scenarios.
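
As a sketch of how the clarify → classify → recommend loop and the confidence language could fit together (thresholds and phrasing are assumptions, not the deployed values):

```typescript
// Illustrative flow control. Thresholds and wording are assumptions.

type Stage = "clarify" | "classify" | "recommend";

// Translate raw model confidence into the plain-language reliability
// statements shown to users at each step.
function confidenceStatement(score: number): string {
  if (score >= 0.9) return "likely safe";
  if (score >= 0.7) return "probably safe, confirm the data scope";
  return "requires review";
}

// Advance only when confidence supports it; otherwise loop back to
// clarifying questions, mirroring how SMEs probe before ruling.
function nextStage(current: Stage, score: number): Stage | "escalate" {
  if (score < 0.7) return current === "clarify" ? "escalate" : "clarify";
  if (current === "clarify") return "classify";
  if (current === "classify") return "recommend";
  return "recommend"; // terminal stage
}
```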

04

Design Solution

A Guided, Explainable Copilot for Permission Evaluation
  • The Copilot was designed to provide a structured, explainable pathway through complex permission decisions. Its conversational UX interprets developer intent, classifies requested permissions by risk, and recommends least-privilege alternatives. The information architecture links API scopes to risk tiers and RBAC guidance through a clear hierarchical schema, while the logic framework follows a consistent pattern of classification → confidence → rationale → resources to maintain alignment across responses. Supporting artifacts, including a prompt map, risk-model diagram, knowledge-base schema, and sample interactions, ensured internal coherence.

    The overall system is grounded in three principles: transparency, traceability, and human-in-the-loop control.
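
One way to read that pattern is as a response contract that every Copilot turn must satisfy; the shape below is a hypothetical sketch, not the shipped schema.

```typescript
// Hypothetical response contract enforcing
// classification → confidence → rationale → resources.

interface CopilotResponse {
  classification: "Allowed" | "Restricted";
  confidence: number; // rendered as "likely safe" / "requires review"
  rationale: string;  // traceable, plain-language "why"
  resources: { title: string; url: string }[]; // docs, safer alternatives
  escalateToHuman: boolean; // human-in-the-loop trigger when confidence is low
}
```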

05

Impact

Creating a unique and memorable experience
  • The redesigned permission-evaluation system modeled a reduction in re-review rates from 32% to roughly 10%, potentially freeing over 100 SME hours per month.

    By standardizing evaluation logic across teams, it brought consistency to a historically fragmented review process. The underlying framework proved scalable and was later adapted for additional internal Copilots supporting access governance and self-service scenarios. Most importantly, it contributed to a cultural shift: positioning AI not as a novelty, but as a trusted compliance partner in high-stakes decision-making.

Outcomes
  • Review Accuracy Uplift: Improved permission classification accuracy by grounding the agent in a structured, tiered evaluation model, reducing false approvals and unnecessary escalations.

Re-Review Reduction: Cut re-review cycles by enabling developers to submit more complete, policy-aligned requests on the first attempt, reducing SME workload and cycle time.

Time-to-Decision: Shortened the overall approval timeline by automating early triage steps and surfacing least-privilege recommendations instantly within the request workflow.

Policy Alignment & Consistency: Standardized decision logic across review teams, ensuring that risk evaluations, restricted-permission checks, and required artifacts were applied uniformly across all submissions.

06

Reflection

Agents are truly 90% UX and 10% UI
  • “At enterprise scale, design’s highest value isn’t aesthetic polish; it’s structuring probabilistic systems so people can act with confidence.”

    The work demonstrated how UX design can orchestrate model behavior, human oversight, and organizational policy into a single coherent system. Future opportunities include integrating live telemetry to continually retrain the model and developing a shared design system for governance-based AI assistants.