Meeting Intelligence Agent

A confluence of colorful layered waves converging inside a sphere.
LISTENING BECOMES UNDERSTANDING

Designing an AI assistant that listens, structures, and remembers, transforming team conversation into living knowledge.

01

Turning conversation into collective memory

PURPOSE
Design an AI meeting assistant that listens to live discussions, translates them into structured flow diagrams, and exports actionable tasks, turning otherwise transient conversations into persistent, shareable insight.

MY ROLE
As UX Strategy and Interaction Design Lead, I defined the product concept, agent behavior, and human-AI interaction model. My work focused on orchestrating how multiple probabilistic systems (transcription, semantic parsing, and visualization) collaborated to preserve human intention without overwhelming users with automation.

This included:
Defining the agent's behavior and its speech-to-diagram translation logic.
Establishing feedback and confirmation loops to ensure the model’s confidence thresholds aligned with user trust (sketched below).
Designing for traceability and transparency, ensuring every AI decision was visible and reversible.
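A minimal sketch of that confidence-gated loop, assuming a simple three-way routing; the thresholds, names, and outcomes here are illustrative assumptions, not the production values:

```python
# Hypothetical sketch of the confidence-gated confirmation loop described
# above; thresholds and names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DiagramUpdate:
    description: str   # e.g. "add node 'Review budget' after 'Kickoff'"
    confidence: float  # model confidence in [0, 1]

AUTO_APPLY = 0.90  # assumed threshold: apply silently, keep undo available
CONFIRM = 0.60     # assumed threshold: ask the facilitator first

def route_update(update: DiagramUpdate) -> str:
    """Decide how an AI-proposed change reaches the canvas."""
    if update.confidence >= AUTO_APPLY:
        return "apply"    # applied immediately; logged and reversible
    if update.confidence >= CONFIRM:
        return "confirm"  # surfaced as a one-tap micro-confirmation
    return "hold"         # parked for review, never auto-applied
```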

TEAM
1 designer (myself), 5 AI developers, 1 technical architect, 2 operations SMEs, and 1 engagement sponsor.
 
Timeline: 8-week proof-of-concept phase.

OUTCOME
A working prototype demonstrating real-time speech-to-diagram translation with editable exports, validated through live pilot sessions and later adopted as a reference model for other enterprise workflow agents.

  • ROLE: UX Lead

    CLIENT: Global Fortune 50 Technology Company

    DATE: 2025

02

Challenge

Context
  • Workshop and process-mapping sessions are often high in cognitive value but low in documentation quality. Facilitators capture fragments (sticky notes, photos, whiteboards), yet the relational structure between ideas disappears.

    The challenge was to create a “living record” of these sessions: something that listens, understands, and synthesizes group dialogue into persistent, structured knowledge.

Abstract sound-like waveforms representing a mountain.
Core challenges
  • /01 Translating unstructured, multi-speaker conversation into coherent hierarchies of ideas.

    /02 Calibrating automation to augment, not override, human facilitation.

    /03 Achieving integration across enterprise tools without disrupting existing habits.

    /04 Explaining model behavior and limitations clearly to non-technical users.

Success metrics
  • Transcription accuracy: ~96% in controlled sessions.

Relevant, useful: Positive usability feedback from facilitators.

Stakeholder buy-in: Enthusiasm and funding for future expansions.

Reduced turnaround: Integrates with the existing suite of work tools and is capable of taking action.

03

Process

Probabilistic experiences start with the human
  • I began by interviewing operations leads and workshop facilitators to map the full landscape of pain points across their current processes.

    From these conversations, I modeled the flow from discussion → insight → action, revealing where automation could genuinely improve speed and fidelity.

    This work established three foundational principles for the agent: assistive, transparent, and traceable. These became the guiding criteria for every design decision that followed.

Devising the flow
  • The system processes meeting input through a four-stage pipeline: voice capture, semantic parsing, flow-diagram synthesis, and task export.

    Spoken intent is ingested in real time, interpreted into meaningful entities, translated into executable Mermaid diagrams, and finally converted into actionable tasks. This pipeline demonstrates how the prototype transforms raw dialogue into structured, operational intelligence within seconds.

Simplified User Flow
Simplified flow demonstrating the pipeline of speech to diagram creation.
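To make the pipeline concrete, here is a minimal orchestration sketch; each function name is a hypothetical stand-in for the prototype's actual service calls, not its real API:

```python
# Illustrative skeleton of the four-stage pipeline: voice capture,
# semantic parsing, flow-diagram synthesis, and task export.
# All names below are hypothetical stand-ins.

def capture_audio(meeting_hook) -> bytes: ...                 # voice capture
def transcribe(audio: bytes) -> str: ...                      # speech -> text
def parse_entities(transcript: str) -> list[dict]: ...        # semantic parsing
def synthesize_mermaid(entities: list[dict]) -> str: ...      # diagram synthesis
def export_tasks(entities: list[dict]) -> list[dict]: ...     # task export

def process_turn(meeting_hook) -> tuple[str, list[dict]]:
    """One pass from spoken input to structured, actionable output."""
    transcript = transcribe(capture_audio(meeting_hook))
    entities = parse_entities(transcript)
    return synthesize_mermaid(entities), export_tasks(entities)
```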
Iterating into reality
  • Visualizing: Created system flow diagrams outlining data state transitions and model confidence handoffs between components.

Prototyping: Developed low-fidelity prototypes in Figma and Miro to visualize conversational flow and diagram evolution.

Refinement: Partnered with developers to simulate AI responses, latency, and real-time updates.

Testing: Tested early versions with facilitators to measure comprehension, latency tolerance, and feature discoverability.

04

Design Solution

How the prototype worked
  • The prototype used a split-view interface that paired a live transcript with an evolving flow diagram, allowing participants to watch ideas take form in real time. As the agent ingested speech through a meeting-service hook, it parsed natural language into entities, generated nodes, and mapped relationships dynamically within a hierarchical graph.

    The design emphasized three principles: clarity through continuous visibility, control through reversible actions, and continuity through a traceable evolution of ideas.
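A minimal sketch of the hierarchical graph behind that split view, assuming parsed entities arrive as simple id/label/parent records; the structures and names are illustrative, not the prototype's actual code:

```python
# Illustrative model of the live, hierarchical graph: each parsed entity
# becomes a node, and parent links keep the diagram and transcript in sync.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    label: str
    children: list["Node"] = field(default_factory=list)

class LiveDiagram:
    """Grows incrementally as the agent parses new entities."""

    def __init__(self) -> None:
        self.nodes: dict[str, Node] = {}
        self.edges: list[tuple[str, str]] = []

    def add_entity(self, node_id: str, label: str, parent: str | None = None) -> None:
        node = self.nodes.setdefault(node_id, Node(node_id, label))
        if parent and parent in self.nodes:
            self.nodes[parent].children.append(node)
            self.edges.append((parent, node_id))  # drives the live re-render
```

Each utterance yields zero or more add_entity calls, which is what lets participants watch nodes and relationships appear as they speak.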

Evolving the agent
  • The diagrams below are unedited renders generated directly by the agent during our iterative development. Each reflects a stage in how the model learned to parse procedural logic and express it as structured Mermaid code. Through targeted prompt tuning, we refined the agent’s reasoning, flow stability, and syntax discipline until it achieved over 97% first-pass render success.

    By the final iteration, the agent consistently produced complete, self-contained diagram blocks, including flow direction, node shapes, and CSS classes, ensuring every update remained coherent and render-ready (a concrete sketch of such a block follows the renders below).
    What follows is a chronological record of that evolution.

Initial example of agent diagram rendering, with few nodes.

The agent begins by establishing the problem space: identifying the user’s build intent and classifying fundamental capability types. At this stage, the system is only gathering anchors, not yet interpreting complexity.

Second example of agent diagram rendering, with additional nodes and connections.

As the conversation progresses, the agent introduces support pathways and outcome checks. This phase reflects how the system begins to mediate decisions rather than simply record them.

Third example of agent diagram rendering, with structure mapped out and core nodes expressed.

The model surfaces environment and data-handling choices as first-class decision points. At this stage, the agent is constructing a connected understanding of prerequisites, constraints, and required checkpoints.

Final example of agent diagram rendering, a full end-to-end flowchart with logical nodes connected and key shapes from the legend properly used.

The complete flow reveals the full procedural map. This represents the agent’s stabilized interpretation of the entire build lifecycle.
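As a concrete illustration of the self-contained block contract described above, here is a small sketch of how such an emission might be assembled; the styling values, class names, and helper are all invented for illustration:

```python
# Hypothetical assembly of one complete, render-ready Mermaid block:
# every emission carries its own flow direction, node shapes, and CSS
# classes so it renders on its own. Styling values are invented.
MERMAID_TEMPLATE = """flowchart TD
    classDef decision fill:#ffe9a8,stroke:#b58900
    classDef step fill:#e8f0fe,stroke:#1a73e8
    {nodes}
    {edges}"""

def emit_block(nodes: list[str], edges: list[str]) -> str:
    """Assemble one self-contained Mermaid block from node and edge lines."""
    return MERMAID_TEMPLATE.format(
        nodes="\n    ".join(nodes),
        edges="\n    ".join(edges),
    )

print(emit_block(
    nodes=['A["Capture intent"]:::step', 'B{"Data sensitive?"}:::decision'],
    edges=["A --> B"],
))
```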

05

Impact

Establishing a repeatable pattern for AI-driven process intelligence
  • The prototype demonstrated the workflow from live transcription to visual mapping to editable file export in under two minutes, proving the feasibility of real-time “conversation-to-structure” automation. Its clarity and reliability attracted strong interest from executives and SMEs, who began treating it as a reference pattern for AI + Process design across the organization.

    The work also catalyzed new initiatives exploring broader “meeting-to-memory” intelligence, positioning this approach as a foundational direction for future enterprise collaboration tools.

  • 97-98% accuracy: translating natural language into transcript.

94-96% accuracy: translating transcript into the ideal diagram.

2-4 seconds: near real-time updates to the diagram in meeting.

Under 2 minutes: generation of an editable diagram asset export.

06

Reflection

Agents are truly 90% UX and 10% UI
  • “The real value of AI isn’t in how much it automates, it’s in how faithfully it preserves human structure and meaning.”

    This project deepened my belief that probabilistic systems should amplify human sense-making rather than replace it. Designing for trust, reversibility, and continuity proved essential in turning generative output into organizational knowledge.

    NEXT STEPS
    / Expand integration with planning and workflow APIs.

    / Implement confidence visualization and micro-confirmations.
       
    / Evolve auto-layout logic and summarization for larger datasets.
       
    / Develop telemetry for continuous learning and performance feedback.