Discovery Agent - FAQs

A list of frequently asked questions regarding the Discovery Agent

Written by Will Edwards
Updated over a month ago

What is the Moments Lab Discovery Agent?

The Discovery Agent is your AI-powered research assistant for video. Instead of relying on keyword-based search, it understands natural language, context, and intent to surface the exact clips, quotes, or moments you need—down to timecodes. It’s like having your smartest researcher embedded in your media library, available 24/7, delivering answers in seconds instead of hours.


How is it different from current search workflows?

Traditional search depends on exact keywords, metadata, or the one person who knows the system. The Discovery Agent takes a completely different approach:

  • Understands what you mean, not just what you type

  • Finds results even with poor or missing metadata

  • Surfaces moments, not just files

  • Lets you refine results in conversation instead of running endless queries

It transforms hours of scrubbing into seconds of discovery.


Who benefits the most from this feature?

  • Digital & social teams → Quickly find highlights without waiting on archivists

  • Archivists & system experts → Reduce constant requests and focus on higher-value work

  • External partners (sponsors, broadcasters) → Self-serve access to clips they need without bottlenecks

In short: every group in your workflow saves time, reduces friction, and produces better content faster.


Does it work only for short-form, social-style content?

Not at all. The Discovery Agent is just as powerful for long-form storytelling. Whether you’re cutting a recap, a documentary, or a feature promo, it uncovers supporting footage, forgotten archive material, and thematic moments to enrich and elevate your stories.


Does it require my teams to learn new workflows?

No training is needed. The Agent uses a simple chat-style interface—users just describe what they’re looking for in natural language. Even freelancers or new hires can be productive on day one. It’s as easy as asking a teammate, not learning a new tool.


Can it answer general knowledge questions?

Yes. Beyond searching your library, the Agent can also answer contextual questions (e.g., “Who won this race in 2018?”). This saves teams from switching between tools or tabs, keeping research and production in one place.


How do I access the Discovery Agent?

The Agent is built directly into the Moments Lab platform. Users can launch it through the Discovery Agent icon in their workspace and start searching right away.


Can I use the Agent to search within specific collections?

Not yet. Today, the Discovery Agent searches across all content a user has permission to access. Future updates will allow finer-grained targeting within specific collections.


What languages are supported?

The Discovery Agent currently supports English, French, Arabic, Dutch, and Spanish. Both the interface and responses are delivered in your chosen language, enabling global teams to collaborate more easily.


What roles have access to the Discovery Agent?

All roles within the Moments Lab platform can access and use the Agent.


Can the Agent read and understand documents or articles linked in a prompt?

Yes. If you include a link to an article or document, the Agent can read and incorporate that context into its response.


Can I upload images or videos in my prompt?

Not directly. The Agent can only analyze videos that have been processed by MXT-2 within the Moments Lab platform.


How long does it take the Agent to respond to a query?

On average, about 30 seconds. Short text-based tasks are faster, while complex requests—such as returning multiple diverse clips—take slightly longer.


Can I make follow-up prompts?

Yes. The Agent is conversational, so you can refine results, ask for similar content, or pivot to a new request without starting from scratch—just like an ongoing conversation with a researcher.


Does the Agent replace global search in the platform?

No. It complements global search. Power users can still run precise searches the old way, while the Agent makes discovery more intuitive for everyone else.


Can I provide feedback on Agent answers?

Yes. You can give a thumbs up or down. Feedback is currently used to monitor performance and identify pain points, and will eventually be incorporated into reinforcement learning to improve responses over time.


What are best practices for prompting the Discovery Agent?

The more context you provide, the better the results. For example:

  • Instead of: “Find interviews”

  • Try: “Find emotional player interviews after the 2022 final”

Framing what you’re looking for and why helps the Agent plan and return highly relevant results.


Does the Agent return files that aren’t indexed by MXT?

Yes, but only at the file level. If a file hasn’t been indexed, the Agent can still surface it based on its title or basic metadata. To return moment-level clips, the file needs to be indexed by MXT.


Can the Discovery Agent be branded or renamed?

Not today. However, custom branding and naming are being considered for the future.


What customization options are available?

You can configure the Agent with a workspace description that provides context on:

  • The types of content in your library

  • The typical outputs your teams create

  • Date or content restrictions

This helps tailor responses for your organization and improves accuracy across all users.


Can individual users provide custom instructions?

Not yet, but it’s on the roadmap. Soon, users will be able to set personal preferences for how the Agent responds, making it even more like a personalized teammate.


Does the Discovery Agent use semantic search?

Yes. It uses a hybrid approach combining:

  • Lexical search → exact word matches

  • Text semantic search → intent-based matches (e.g., “dog” ≈ “puppy”)

  • Visual semantic search → frame-level matches based on what’s actually in the footage

This blend ensures higher recall and precision than keyword-only search.
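As an illustration of how such a blend can work, here is a minimal sketch that combines the three signals into one ranking score. The weights, scores, and function names are purely illustrative assumptions, not Moments Lab's actual implementation or tuning:

```python
def hybrid_score(lexical, text_semantic, visual_semantic,
                 weights=(0.3, 0.4, 0.3)):
    """Blend the three retrieval signals into a single ranking score.

    Each input is a 0-1 relevance score; weights are illustrative only.
    """
    w_lex, w_txt, w_vis = weights
    return w_lex * lexical + w_txt * text_semantic + w_vis * visual_semantic

# Toy scores for two candidate clips on the query "puppy playing":
clip_a = {"lexical": 0.0, "text_semantic": 0.9, "visual_semantic": 0.8}  # metadata only says "dog"
clip_b = {"lexical": 1.0, "text_semantic": 0.4, "visual_semantic": 0.1}  # title contains "puppy" but footage is unrelated

ranked = sorted(
    [("clip_a", hybrid_score(**clip_a)), ("clip_b", hybrid_score(**clip_b))],
    key=lambda item: item[1],
    reverse=True,
)
print(ranked)
```

Note how `clip_a` ranks first despite having no exact keyword match: the semantic signals rescue results that keyword-only search would miss, which is the point of the hybrid approach.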


Does the Agent use embeddings?

Yes. Embeddings power semantic understanding by mapping words, phrases, and visuals into a shared vector space, enabling the Agent to find related concepts—not just exact terms.
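To make this concrete, here is a toy sketch of how embedding similarity works. The 3-dimensional vectors below are made up for illustration; real embedding models use hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: related concepts land close together in vector space.
dog = [0.9, 0.8, 0.1]
puppy = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(dog, puppy))  # high: related concepts
print(cosine_similarity(dog, car))    # low: unrelated concepts
```

Because “dog” and “puppy” map to nearby vectors, a query for one can retrieve content labeled with the other, even when the exact term never appears.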


What is “Reasoning” and how does the Agent use it?

Reasoning refers to how the Agent breaks down complex prompts into multi-step tasks—for example:

  1. Interpreting intent (“Find underdog stories”)

  2. Cross-referencing external knowledge if needed (e.g., event outcomes)

  3. Mapping that against your media library

The Discovery Agent uses OpenAI’s GPT-4.1 to perform these reasoning steps, simulating how a skilled researcher would approach the task.


What is MCP/A2A and does the Agent support it?

MCP (Model Context Protocol) allows an agent to use external tools and environments. A2A (Agent-to-Agent) allows multiple agents to collaborate.

The Discovery Agent is not yet fully A2A/MCP enabled. Early demonstrations are underway, but production-ready deployments are still in development.


Can I use the Discovery Agent inside my existing MAM?

Not yet. To use the Agent, content must be stored (at least as proxy files) within the Moments Lab platform.


Can I access the Discovery Agent via Adobe Premiere?

Not currently. However, integrations with editing tools are being explored for future releases.


Is the Discovery Agent available via API?

No, not today. API access is on the roadmap.


If you have any further questions, don’t hesitate to reach out to us at support@momentslab.com — our team will be happy to help.