AI · Machine Learning · Technology

This AI Paper from Salesforce Introduces VLM2VEC and MMEB: A Contrastive Framework and Benchmark for Universal Multimodal Embeddings

By capernaum
Last updated: 2025-04-11 20:56

Multimodal embeddings combine visual and textual data into a single representational space, enabling systems to understand and relate images and language meaningfully. These embeddings support various tasks, including visual question answering, retrieval, classification, and grounding. The technology is especially important for applications that interpret real-world content through both visual and linguistic lenses, such as document analysis, digital assistants, and visual search engines.

A pressing challenge has been the inability of current models to generalize across diverse tasks and modalities effectively. Most models are trained for highly specific tasks or underperform when applied to unfamiliar datasets. Furthermore, without a broad and unified benchmark, evaluating performance across multimodal tasks becomes inconsistent and fragmented. This limits the models’ capability to handle the variety of functions required in realistic, cross-domain applications, especially when new data distributions are introduced.

Several tools, such as CLIP, BLIP, and SigLIP, have been proposed for generating visual-textual embeddings. These models typically use separate encoders for images and text, merging their outputs through simple operations like score-level fusion. While these approaches offer baseline utility, they suffer from limited cross-modal reasoning and generalization ability. Their performance in zero-shot conditions tends to decline due to shallow fusion strategies and the lack of task-specific instruction handling during training.
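
As a hedged illustration of that dual-encoder, score-level-fusion pattern, the sketch below uses the Hugging Face transformers CLIP API. The checkpoint name, example image, and captions are placeholders for the sake of the example, not assets from the paper; each modality is encoded separately and the outputs are compared with a single cosine-similarity score, which is exactly the shallow fusion described above.

```python
# Sketch of the dual-encoder, score-level-fusion pattern used by CLIP-style models.
# Assumes the `transformers` and `Pillow` packages are installed; the checkpoint
# is a common public one, not one of the models evaluated in the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                      # any local image
texts = ["a photo of a dog", "a photo of a cat"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])

# Score-level fusion: a single cosine similarity between independently encoded modalities.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = image_emb @ text_emb.T                        # shape: (num_images, num_texts)
print(scores)
```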

In a collaboration between Salesforce Research and the University of Waterloo, researchers introduced a new model, VLM2VEC, alongside a comprehensive benchmark named MMEB. MMEB comprises 36 datasets spanning four major tasks: classification, visual question answering, retrieval, and visual grounding. It splits these into 20 datasets for training and 16 for evaluation, including out-of-distribution tasks. The VLM2VEC framework is designed to convert any vision-language model into an embedding model through contrastive training, allowing the resulting model to handle any combination of text and image inputs while following task instructions.

To build VLM2VEC, the research team used backbone models such as Phi-3.5-V and LLaVA-1.6. The method begins by constructing task-specific instruction-based queries and targets, processed through a vision-language model to generate embeddings. Contrastive training is employed using the InfoNCE loss function with cosine similarity, aligning embeddings by maximizing the similarity between matching query-target pairs while minimizing it for mismatches. To support large batch sizes, critical for training with diverse negatives, the researchers used GradCache, which splits batches into memory-manageable sub-batches and accumulates gradients. This process ensures efficient training even with the high memory demands of multimodal inputs. Task-specific instructions are embedded within the training pipeline to help the model adapt its encoding to the nature of the task, such as grounding or retrieval, further boosting its generalization capabilities.
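
A minimal sketch of that contrastive objective, assuming PyTorch, is shown below. The temperature value, tensor shapes, and instruction string are illustrative assumptions; in the actual pipeline the embeddings come from a vision-language backbone such as Phi-3.5-V or LLaVA-1.6, and GradCache splits the batch into sub-batches rather than the toy tensors used here.

```python
# Minimal sketch of the InfoNCE objective with cosine similarity and in-batch negatives.
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  target_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """query_emb, target_emb: (batch, dim); row i of each tensor is a matching pair."""
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = q @ t.T / temperature            # cosine similarities, temperature-scaled
    labels = torch.arange(q.size(0), device=q.device)
    # Maximize similarity of matching pairs (the diagonal) while minimizing it
    # for every other target in the batch (in-batch negatives).
    return F.cross_entropy(logits, labels)

# Task instructions are prepended to the query before encoding, e.g. (hypothetical format):
instruction = "Represent the given image and question for retrieving a relevant answer: "
# query_emb = encode(instruction + question, image)   # produced by the VLM backbone

# Toy usage with random embeddings standing in for VLM outputs:
q = torch.randn(32, 768)
t = torch.randn(32, 768)
print(info_nce_loss(q, t).item())
```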

Performance results demonstrate the advantage of the proposed method. The best-performing version of VLM2VEC used LLaVA-1.6 as its backbone, applied LoRA tuning, and processed images at 1344 × 1344 resolution. This configuration achieved a Precision@1 score of 62.9% across all 36 MMEB datasets. In zero-shot tests on the 16 out-of-distribution datasets, it maintained a strong 57.1% score. Compared to the best-performing baseline model without fine-tuning, which scored 44.7%, VLM2VEC showed an 18.2-point improvement. Compared to the top fine-tuned baseline at 47.2%, the improvement was 15.7 points. Across all task categories—classification, VQA, retrieval, and grounding—the model consistently scored above 50%, a level of performance not matched by any baseline. The results also indicate that LoRA-tuned variants outperformed those trained with full fine-tuning, showing that parameter-efficient training strategies can deliver higher accuracy.
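
The Precision@1 figures quoted above can be read as the fraction of queries whose top-ranked candidate, scored by cosine similarity against the query embedding, is the correct target. The sketch below is a hedged illustration of that metric, with random tensors standing in for real model embeddings.

```python
# Sketch of Precision@1 over embedding-based retrieval: for each query, rank the
# candidate targets by cosine similarity and check whether the top hit is the
# ground-truth match. Random tensors stand in for real model embeddings.
import torch
import torch.nn.functional as F

def precision_at_1(query_emb: torch.Tensor, target_emb: torch.Tensor) -> float:
    """Row i of target_emb is the correct target for row i of query_emb."""
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    top1 = (q @ t.T).argmax(dim=-1)                       # index of best-scoring target
    correct = torch.arange(q.size(0), device=q.device)
    return (top1 == correct).float().mean().item()

print(precision_at_1(torch.randn(100, 768), torch.randn(100, 768)))
```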

The research clearly outlines a solution to the problem of task-specific multimodal embedding tools that lack generalization. By combining a well-structured training framework and a robust benchmark, the study demonstrates a universal embedding model that handles varied tasks effectively using contrastive training and instruction-following. This development marks a meaningful step forward in scalable, adaptable multimodal AI.


Check out the Paper and Project. All credit for this research goes to the researchers of this project.

The post This AI Paper from Salesforce Introduces VLM2VEC and MMEB: A Contrastive Framework and Benchmark for Universal Multimodal Embeddings appeared first on MarkTechPost.
