After running a search in Moments Lab—whether using keyword mode or semantic mode—your results appear in the results panel with flexible viewing options designed to help you find the right content quickly and efficiently.
This article explains how to switch between different view modes, toggle between moments and full media files, sort your results, search in multiple languages, and apply filters to refine what you see.
Switch Between Grid View and List View
Once your search results load, you can choose how you want to view them:
Grid View
Grid View displays results as thumbnails, making it easy to scan visually.
Thumbnail-first layout that prioritizes visual identification
Scrubbing capability: hover over any thumbnail to preview the shot or moment without opening it
Best for editors and producers who need to quickly assess visual content
List View
List View displays results with detailed metadata alongside smaller thumbnails.
Metadata-dense layout that shows textual descriptions, tags, dates, and other fields
Ideal for fast scanning and sorting based on metadata
Best for researchers, archivists, and anyone working with rights information, credits, or structured data
Top Tip: You can customize which metadata fields appear in List View by adjusting your user display preferences. This allows you to tailor the interface to your specific workflow.
Toggle Between Moments and Media Files
Moments Lab indexes content at two distinct levels, and you can switch between them depending on your needs:
Moments View
Displays individual AI-identified shots, sequences, and moments within longer assets
Ideal when you need a specific moment, scene, or visual element
Enables precise discovery without manually scrubbing through entire files
Media Files View
Displays full clips, episodes, programs, or long-form assets
Useful when you want to see complete files rather than individual moments
Best for understanding full context or working with entire assets
Use the toggle control above the results panel to switch between these two views.
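Under the hood, both views can be thought of as the same set of timecoded results presented at two levels of granularity. The TypeScript sketch below is purely illustrative; the type and field names are assumptions, not Moments Lab's actual data model.

```typescript
// Illustrative only: these names are assumptions, not Moments Lab's data model.
interface MomentResult {
  mediaFileId: string; // parent asset this moment belongs to
  tcIn: string;        // timecode in, e.g. "00:01:23:10"
  tcOut: string;       // timecode out
  description: string;
}

// Moments View lists every moment; Media Files View collapses
// the same results down to their parent assets.
function toMediaFilesView(moments: MomentResult[]): string[] {
  return [...new Set(moments.map((m) => m.mediaFileId))];
}
```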
Understanding Different Moment Types in Moments View
When searching in Moments View mode, your results can include several different types of moments, each generated or defined in different ways. Understanding these types helps you refine your search and find exactly what you need.
Sequence
A sequence is a timecode in/timecode out segment determined by MXT, Moments Lab's patented multimodal AI. It is created by combining what MXT sees with what it hears in any spoken words in the audio.
Describes what is happening in a scene
Example: "John Smith discussing climate policy in the conference room"
Ideal for finding specific scenes, actions, or visual contexts
Soundbite
A soundbite is a timecode in/timecode out segment identified by MXT as a significant part of an audio transcript.
Highlights key moments in speeches, interviews, debates, or commentary
Example: A politician's key statement during a press conference, or a standout answer in an interview
Ideal for editors looking for quotable moments or impactful audio
Transcript
Transcript moments appear when any word in your search query matches a word in the spoken transcript.
Enables word-level search across all spoken content
Useful for finding specific terms, names, or phrases mentioned in dialogue
Ideal for research, fact-checking, or locating exact quotes
Annotation
Annotations are used primarily in sports use cases and are generated when your search query matches words in your sports data feed.
Examples include: goal, tackle, try, save, touchdown, penalty, assist, and more depending on the sport
Enables precise discovery of specific plays, actions, or events within matches
Ideal for sports editors, highlight producers, and analysts building reels or reviewing gameplay
User Moment
A user moment is a manually defined moment, typically created by a media manager or media logger.
Highlights a specifically important part of a video as determined by your team
Useful for surfacing editorial picks, flagged content, or pre-logged highlights
Ideal when you want to prioritize human-curated moments over AI-generated ones
Custom Moment
Custom moments are customizable AI-generated clips driven by a system prompt and specific rules based on content type.
Configured to meet your organization's unique needs
Can be tailored to specific workflows, content types, or business logic
Ideal for organizations with specialized content or standardized editorial requirements
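One way to picture these six types is as labels on the same underlying timecoded segment. The following sketch is a hypothetical illustration; the type names and fields are assumptions, not the platform's schema.

```typescript
// Hypothetical illustration of the six moment types described above;
// not Moments Lab's actual schema.
type MomentType =
  | "sequence"    // AI-described scene, combining visual and audio cues
  | "soundbite"   // significant segment of the audio transcript
  | "transcript"  // word-level match against spoken content
  | "annotation"  // match against a sports data feed
  | "user"        // manually logged by a media manager or logger
  | "custom";     // AI-generated, driven by an organization-specific prompt

interface Moment {
  type: MomentType;
  tcIn: string;   // timecode in
  tcOut: string;  // timecode out
  text: string;   // description, quote, or matched words
}

// Example: surface only human-curated picks, as described under "User Moment".
const userPicks = (moments: Moment[]) =>
  moments.filter((m) => m.type === "user");
```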
Why Moment Types Matter
By understanding which moment type you're viewing, you can:
Refine your search strategy (e.g., focus on soundbites for interviews, or annotations for sports content)
Prioritize certain types of results based on your workflow
Combine moment type filters with keyword or semantic search for surgical precision
Sorting Your Results
Above the results panel, you'll find options to control how your results are ordered:
Available Sort Options
Relevance (default): Results ranked by how closely they match your search query, powered by Moments Lab's AI-native search engine
Date — ascending: Oldest content appears first
Date — descending: Newest content appears first
Title — ascending: Media titles beginning with A appear first
Title — descending: Media titles beginning with Z appear first
Choose Relevance when you want the most contextually accurate matches. Choose Date sorting when chronology matters or you're researching content from a specific period.
Note: Sorting by Relevance, Date, and Title (ascending or descending) is only available in keyword search mode.
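To make these orders concrete, here is a minimal illustrative sketch of how each option maps onto a simple comparator; the field names are assumptions, not part of Moments Lab's interface.

```typescript
// Illustrative comparators for the sort options above;
// field names are assumptions, not the platform's API.
interface Result { title: string; date: Date; relevance: number }

const sorters: Record<string, (a: Result, b: Result) => number> = {
  relevance: (a, b) => b.relevance - a.relevance,             // best match first
  dateAsc:   (a, b) => a.date.getTime() - b.date.getTime(),   // oldest first
  dateDesc:  (a, b) => b.date.getTime() - a.date.getTime(),   // newest first
  titleAsc:  (a, b) => a.title.localeCompare(b.title),        // A first
  titleDesc: (a, b) => b.title.localeCompare(a.title),        // Z first
};

// Usage: results.sort(sorters.dateDesc);
```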
Using Filters
You can use filters to narrow down your search results.
If your organization uses custom metadata fields—such as rights status, location, event type, project name, label, or contributor—you can apply those as filters to narrow your results.
You can combine multiple search and filter tools to create highly targeted searches. By layering these options together, you can eliminate irrelevant results and surface only the most relevant content, saving time and improving search precision.
Note: Filters in Moments Lab work dynamically based on your current results. Once you apply a filter, the other available filter options automatically update to show only criteria that exist within the displayed results.
This means:
You'll only see filter options that are relevant to the content currently shown
As you refine your search with additional filters, the available filter values adjust accordingly
You won't waste time selecting filters that would return zero results
The filtering experience becomes faster and more intuitive as you narrow your search
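This is the standard faceted-search pattern: each time a filter is applied, the values offered for the remaining filters are recomputed from whatever results survive. A minimal sketch of the idea, with assumed field names, might look like this:

```typescript
// Illustrative sketch of dynamic filters: after each filter is applied,
// the values offered for the other filters are recomputed from the
// results that remain. Names are assumptions, not the platform's API.
type Item = Record<string, string>;

function availableFacetValues(results: Item[], field: string): Set<string> {
  return new Set(results.map((item) => item[field]).filter(Boolean));
}

const allResults: Item[] = [
  { rightsStatus: "cleared", location: "Paris" },
  { rightsStatus: "restricted", location: "Lyon" },
];

// Apply one filter, then recompute the other facets from the survivors:
const cleared = allResults.filter((r) => r["rightsStatus"] === "cleared");
const locations = availableFacetValues(cleared, "location"); // Set { "Paris" }
```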
Best Practices
Start with a broad search, then refine: Review your initial results, then apply filters or change views to narrow down
Use Grid View when visual identification is key (e.g., selecting shots for an edit or presentation)
Use List View for metadata-driven tasks (e.g., rights research, logging verification, archive management)
Customize List View metadata fields in your display preferences to match your role and daily workflow needs
Switch to Moments View when you need shot-level or moment-level precision
Switch to Media Files View when you need full-file context or are downloading complete assets
Sort by Relevance when meaning and context matter most
Sort by Date when timeline or recency is the priority
Use multilingual search if your archive contains content in multiple languages or your team is distributed globally
Combine filters with keyword and semantic search to create powerful, surgical queries—especially valuable in large or complex archives
Troubleshooting & FAQs
Q: I don't see the metadata fields I need in List View. How do I change that?
A: You can customize which fields appear by going to your user display preferences and selecting the metadata you want to display.
Q: What's the difference between Moments View and Media Files View?
A: Moments View shows AI-identified shots or sequences within assets (moment-level). Media Files View shows entire files like full episodes or clips (file-level). Choose based on whether you need a specific moment or the whole asset.
Q: How does the Relevance sort option determine order?
A: Relevance is calculated by Moments Lab's AI-native search engine, which uses multimodal metadata—audio, speech, visual, and contextual signals—to rank results by how well they match your query.
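As a rough mental model (the signals and weights below are assumptions for illustration, not the actual ranking function), multimodal ranking can be pictured as a weighted combination of per-signal match scores:

```typescript
// Hypothetical illustration: the signals and weights are assumptions,
// not Moments Lab's actual ranking function.
interface Signals { speech: number; visual: number; context: number } // each 0..1

const relevance = ({ speech, visual, context }: Signals): number =>
  0.4 * speech + 0.4 * visual + 0.2 * context;

// A moment matching strongly in speech and visuals ranks higher:
relevance({ speech: 0.9, visual: 0.8, context: 0.5 }); // 0.78
```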
Q: Can I search in one language and find content indexed in another?
A: Yes. The multilingual search feature intelligently adapts to the language you select, surfacing relevant content even if it was originally indexed in a different language.
Q: Why do some of my results show moments and others show full files?
A: Check which view toggle is active (Moments or Media Files). You may need to switch between them to see the type of content you're looking for.
Q: Can I save my preferred view and filter settings?
A: Display preferences for metadata fields are saved in your user settings. Filter combinations can be reapplied manually or through saved searches if your organization has configured them.
If you need help at any point, feel free to contact Moments Lab via the integrated chat in the platform or by emailing support@momentslab.com.