{"id":70677,"date":"2026-03-24T03:02:17","date_gmt":"2026-03-24T03:02:17","guid":{"rendered":"https:\/\/meetcody.ai\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/"},"modified":"2026-03-26T18:08:01","modified_gmt":"2026-03-26T18:08:01","slug":"gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google","status":"publish","type":"post","link":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/","title":{"rendered":"Gemini Embedding 2: Google&#8217;s First Multimodal Embedding Model"},"content":{"rendered":"<p style=\"text-align: center;\"><em>Gemini Embedding 2: features, benchmarks, pricing, and how to get started<\/em><\/p>\n<p>Last week, Google released <a href=\"https:\/\/meetcody.ai\/blog\/google-introduces-the-multimodal-gemini-ultra-pro-nano-models\/\">Gemini<\/a> Embedding 2, the first natively multimodal embedding model built on the Gemini architecture. If you work with embeddings in any capacity, it deserves your attention: it could significantly disrupt the multimodal embedding pipelines most teams rely on today.<\/p>\n<p>Until now, the flagship embedding models from OpenAI, Cohere, and Voyage have been primarily text-based. 
A few multimodal options existed &#8211; <a href=\"https:\/\/openai.com\/index\/clip\/\">CLIP<\/a> for image-text alignment, <a href=\"https:\/\/blog.voyageai.com\/2026\/01\/15\/voyage-multimodal-3-5\/\">Voyage Multimodal 3.5<\/a> for images and video &#8211; but none covered the full range of modalities in a single, unified vector space. Audio usually had to be transcribed before it could be embedded. Video required frame extraction combined with separately embedded transcripts. Images lived in their own vector space.<\/p>\n<p>Gemini Embedding 2 changes that equation. One model, one API call, one vector space.<\/p>\n<p>Let&#8217;s look at what&#8217;s new.<\/p>\n<h2>What is Gemini Embedding 2?<\/h2>\n<p><a href=\"https:\/\/blog.google\/innovation-and-ai\/models-and-research\/gemini-models\/gemini-embedding-2\/\">Gemini Embedding 2<\/a> (<code>gemini-embedding-2-preview<\/code>) is Google DeepMind&#8217;s first fully multimodal <a href=\"https:\/\/meetcody.ai\/blog\/text-embedding-models\/\">embedding model<\/a>. It takes text, images, video clips, audio recordings, and PDF documents and converts them all into vectors that live in the same shared semantic space.<\/p>\n<p>Unlike earlier multimodal approaches such as CLIP, which pair a vision encoder with a text encoder and align them with contrastive learning at the end, Gemini Embedding 2 is built on the Gemini foundation model itself, so it inherits deep multimodal understanding from the start. 
<\/p>\n<div id=\"attachment_70663\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-70663\" class=\"wp-image-70663 size-full\" src=\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding.png\" alt=\"Multimodal embeddings\" width=\"1024\" height=\"587\" srcset=\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding.png 1024w, https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-300x172.png 300w, https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-768x440.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><p id=\"caption-attachment-70663\" class=\"wp-caption-text\">Image generated by Nano Banana<\/p><\/div>\n<p><strong>Practical example:<\/strong> Imagine you&#8217;re building a learning management system (LMS) with video tutorials, audio lectures, and written guides. With Gemini Embedding 2, you can store the embeddings for all of that content in a single vector space and build a <a href=\"https:\/\/meetcody.ai\/blog\/rag-private-clouds\/\">RAG-based chatbot<\/a> that retrieves the relevant <a href=\"https:\/\/meetcody.ai\/blog\/how-does-cody-generate-responses-using-your-documents\/\">chunks<\/a> from videos, audio, and documents. Previously, that required a multi-stage embedding pipeline &#8211; and even then it only captured transcripts, missing the visual context of a video or the tone of a speaker&#8217;s voice.<\/p>\n<p>The model uses <a href=\"https:\/\/arxiv.org\/abs\/2205.13147\">Matryoshka Representation Learning<\/a>, which means you aren&#8217;t forced to use all 3072 dimensions if you don&#8217;t need them. 
You can scale down to 1536 or 768 and still get usable results.<\/p>\n<p><em>Matryoshka Representation Learning (MRL) is a technique for training embedding models so that the learned representations are useful not only at their full dimensionality but also at various smaller sizes &#8211; nested inside one another like Russian matryoshka dolls. During training, the loss is computed not only on the full embedding but also on several prefixes of the embedding vector. This encourages the model to pack the most important information into the first dimensions, with each later dimension adding finer detail &#8211; a coarse-to-fine structure.<\/em><\/p>\n<h2>Supported modalities and input limits<\/h2>\n<p>The model accepts five input types, all mapped into the same embedding space:<\/p>\n<table>\n<thead>\n<tr>\n<th>Modality<\/th>\n<th>Input limit<\/th>\n<th>Formats<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Text<\/td>\n<td>Up to 8,192 tokens<\/td>\n<td>Plain text<\/td>\n<\/tr>\n<tr>\n<td>Images<\/td>\n<td>Up to 6 images per request<\/td>\n<td>PNG, JPEG<\/td>\n<\/tr>\n<tr>\n<td>Video<\/td>\n<td>Up to 120 seconds<\/td>\n<td>MP4, MOV<\/td>\n<\/tr>\n<tr>\n<td>Audio<\/td>\n<td>Up to 80 seconds (native, no transcription)<\/td>\n<td>MP3, WAV<\/td>\n<\/tr>\n<tr>\n<td>PDFs<\/td>\n<td>PDF documents embedded directly<\/td>\n<td>PDF documents<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>How it compares to existing models<\/h2>\n<p><strong>TLDR:<\/strong> Google&#8217;s new 
Gemini Embedding 2 model outperforms its competitors (its predecessor, Amazon Nova 2, and Voyage Multimodal 3.5) across almost every modality: text, image, video, and speech. It is most convincing in video retrieval and image-text matching. The only benchmark it doesn&#8217;t win is document retrieval, where Voyage keeps a slight edge. Speech-to-text retrieval is a category Gemini owns outright, since no competitor supports it.<\/p>\n<p>Google published comparisons against its own models, Amazon Nova 2 Multimodal Embeddings, and Voyage Multimodal 3.5. Here&#8217;s the full picture:<\/p>\n<h3>Text-to-Text<\/h3>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Gemini Embedding 2<\/th>\n<th>gemini-embedding-001<\/th>\n<th>Amazon Nova 2<\/th>\n<th>Voyage Multimodal 3.5<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>MTEB Multilingual (task mean)<\/td>\n<td><strong>69.9<\/strong><\/td>\n<td>68.4<\/td>\n<td>63.8**<\/td>\n<td>58.5***<\/td>\n<\/tr>\n<tr>\n<td>MTEB Code (task mean)<\/td>\n<td><strong>84.0<\/strong><\/td>\n<td>76.0<\/td>\n<td>*<\/td>\n<td>*<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Gemini Embedding 2 leads multilingual text by a comfortable margin and gains 8 points over its predecessor on code retrieval. Neither Amazon Nova 2 nor Voyage reports code scores. 
<\/p>\n<h3>Text-to-Image<\/h3>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Gemini Embedding 2<\/th>\n<th>multimodalembedding@001<\/th>\n<th>Amazon Nova 2<\/th>\n<th>Voyage Multimodal 3.5<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>TextCaps (recall@1)<\/td>\n<td><strong>89.6<\/strong><\/td>\n<td>74.0<\/td>\n<td>76.0<\/td>\n<td>79.4<\/td>\n<\/tr>\n<tr>\n<td>Docci (recall@1)<\/td>\n<td><strong>93.4<\/strong><\/td>\n<td>&#8211;<\/td>\n<td>84.0<\/td>\n<td>83.8<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>A clear lead in text-to-image retrieval &#8211; more than 9 points ahead of the closest competitor on both benchmarks.<\/p>\n<h3>Image-to-Text<\/h3>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Gemini Embedding 2<\/th>\n<th>multimodalembedding@001<\/th>\n<th>Amazon Nova 2<\/th>\n<th>Voyage Multimodal 3.5<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>TextCaps (recall@1)<\/td>\n<td><strong>97.4<\/strong><\/td>\n<td>88.1<\/td>\n<td>88.9<\/td>\n<td>88.6<\/td>\n<\/tr>\n<tr>\n<td>Docci (recall@1)<\/td>\n<td><strong>91.3<\/strong><\/td>\n<td>&#8211;<\/td>\n<td>76.5<\/td>\n<td>77.4<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Image-to-text retrieval shows the widest gaps, with nearly 15 points over Amazon Nova 2 on Docci.<\/p>\n<h3>Text-to-Document<\/h3>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Gemini Embedding 2<\/th>\n<th>multimodalembedding@001<\/th>\n<th>Amazon Nova 2<\/th>\n<th>Voyage Multimodal 3.5<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>ViDoRe v2 (ndcg@10)<\/td>\n<td>64.9<\/td>\n<td>28.9<\/td>\n<td>60.6<\/td>\n<td><strong>65.5**<\/strong><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The one benchmark where Voyage Multimodal 3.5 keeps the edge (self-reported). Document retrieval is close among the top performers. 
<\/p>\n<h3>Text-to-Video<\/h3>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Gemini Embedding 2<\/th>\n<th>multimodalembedding@001<\/th>\n<th>Amazon Nova 2<\/th>\n<th>Voyage Multimodal 3.5<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Vatex (ndcg@10)<\/td>\n<td><strong>68.8<\/strong><\/td>\n<td>54.9<\/td>\n<td>60.3<\/td>\n<td>55.2<\/td>\n<\/tr>\n<tr>\n<td>MSR-VTT (ndcg@10)<\/td>\n<td><strong>68.0<\/strong><\/td>\n<td>57.9<\/td>\n<td>67.0<\/td>\n<td>63.0**<\/td>\n<\/tr>\n<tr>\n<td>Youcook2 (ndcg@10)<\/td>\n<td><strong>52.5<\/strong><\/td>\n<td>34.9<\/td>\n<td>34.7<\/td>\n<td>31.4**<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Video retrieval is where Gemini Embedding 2 is furthest ahead &#8211; more than 17 points above Voyage on Youcook2 and more than 13 points on Vatex.<\/p>\n<h3>Speech-to-Text<\/h3>\n<table>\n<thead>\n<tr>\n<th>Metric<\/th>\n<th>Gemini Embedding 2<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>MSEB (mrr@10)<\/td>\n<td><strong>73.9<\/strong><\/td>\n<\/tr>\n<tr>\n<td>MSEB ASR**** (mrr@10)<\/td>\n<td><strong>70.4<\/strong><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Speech-to-text retrieval isn&#8217;t contested at all &#8211; neither Amazon nor Voyage supports it. It&#8217;s a category Gemini Embedding 2 owns outright.<\/p>\n<p><em>* or &#8211; score not available ** self-reported *** voyage-3.5 **** The ASR variant converts audio queries to text.<\/em><\/p>\n<h2>Pricing<\/h2>\n<p>The model is currently free during the public preview. 
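Once the paid tier applies, estimating spend is simple arithmetic on the per-unit rates. A back-of-envelope sketch using the numbers from the pricing table below (preview-era rates, so treat them as assumptions that may change; the helper is ours):

```python
# Per-unit rates copied from the preview pricing table (assumed; subject to change).
TEXT_PER_1M_TOKENS = 0.20   # USD per 1M text tokens
IMAGE_EACH = 0.00012        # USD per image
AUDIO_PER_SECOND = 0.00016  # USD per second of audio
VIDEO_PER_FRAME = 0.00079   # USD per video frame

def estimate_cost(text_tokens=0, images=0, audio_seconds=0, video_frames=0):
    """Rough embedding cost in USD for one indexing run."""
    return (
        text_tokens / 1_000_000 * TEXT_PER_1M_TOKENS
        + images * IMAGE_EACH
        + audio_seconds * AUDIO_PER_SECOND
        + video_frames * VIDEO_PER_FRAME
    )

# Example: 5M text tokens, 10,000 images, 3 hours of audio.
print(round(estimate_cost(5_000_000, 10_000, 3 * 3600), 2))  # 3.93
```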
Once it&#8217;s paid, here&#8217;s how it breaks down:<\/p>\n<table>\n<thead>\n<tr>\n<th><\/th>\n<th>Free tier<\/th>\n<th>Paid tier (per 1M tokens)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Text input<\/td>\n<td>Free<\/td>\n<td>$0.20<\/td>\n<\/tr>\n<tr>\n<td>Image input<\/td>\n<td>Free<\/td>\n<td>$0.45 ($0.00012 per image)<\/td>\n<\/tr>\n<tr>\n<td>Audio input<\/td>\n<td>Free<\/td>\n<td>$6.50 ($0.00016 per second)<\/td>\n<\/tr>\n<tr>\n<td>Video input<\/td>\n<td>Free<\/td>\n<td>$12.00 ($0.00079 per frame)<\/td>\n<\/tr>\n<tr>\n<td>Used to improve Google products<\/td>\n<td>Yes<\/td>\n<td>No<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2><strong>Getting started<\/strong><\/h2>\n<p>The model is available now in public preview through the Gemini API and Vertex AI under the model ID <code>gemini-embedding-2-preview<\/code>. It integrates with LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, ChromaDB, and Vector Search.<\/p>\n<pre><code class=\"language-python\">from google import genai\nfrom google.genai import types\n\n# For Vertex AI:\n# PROJECT_ID='&lt;add_here&gt;'\n# client = genai.Client(vertexai=True, project=PROJECT_ID, location='us-central1')\n\nclient = genai.Client()\n\nwith open(\"example.png\", \"rb\") as f:\n    image_bytes = f.read()\n\nwith open(\"sample.mp3\", \"rb\") as f:\n    audio_bytes = f.read()\n\n# Embed text, image, and audio\nresult = client.models.embed_content(\n    model=\"gemini-embedding-2-preview\",\n    contents=[\n        \"What is the meaning of life?\",\n        types.Part.from_bytes(\n            data=image_bytes,\n            mime_type=\"image\/png\",\n        ),\n        types.Part.from_bytes(\n            data=audio_bytes,\n            mime_type=\"audio\/mpeg\",\n        ),\n    ],\n)\n\nprint(result.embeddings)\n<\/code><\/pre>\n<h2>Try it here!<\/h2>\n<p>We 
built a <a href=\"https:\/\/gemini-2-trial.vercel.app\">demo app<\/a> that lets you test gemini-embedding-2&#8217;s multimodal retrieval performance.<\/p>\n<p>You can get an API key by signing in at <a href=\"http:\/\/aistudio.google.com\">aistudio.google.com<\/a>.<\/p>\n<h2>Limits to watch<\/h2>\n<ul>\n<li>The model is still in public preview (&#8220;preview&#8221; means pricing and behavior can change before GA).<\/li>\n<li>Video input is capped at 120 seconds and audio input at 80 seconds.<\/li>\n<li>Performance in niche domains such as financial QA is weaker; evaluate against your own data before committing.<\/li>\n<li>For text-only pipelines with no multimodal plans, the overhead versus text-only models may not be justified.<\/li>\n<\/ul>\n<h2>The bottom line<\/h2>\n<p>Gemini Embedding 2 isn&#8217;t just an incremental improvement; it&#8217;s a category shift. For teams building multimodal RAG systems, semantic search across media types, or unified knowledge bases, it collapses what used to be a multi-model, multi-pipeline problem into a single API call. If your data goes beyond text, this is the model to evaluate first.<\/p>\n<p>Building multimodal RAG shouldn&#8217;t mean stitching together embedding models, vector databases, and retrieval logic from scratch. 
If you want a <a href=\"https:\/\/meetcody.ai\/blog\/rag-as-a-service-unlock-generative-ai-for-your-business\/\">RAG-as-a-Service<\/a> solution that handles the embedding pipeline for you, <a href=\"https:\/\/getcody.ai\/\">sign up<\/a> for the free trial at Cody and start building today.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Gemini Embedding 2: features, benchmarks, pricing, and how to get started Last week, Google released Gemini Embedding 2, the first natively multimodal embedding model built on the Gemini architecture. If you work with embeddings in any capacity, it deserves your attention. It has the potential to significantly disrupt the<a class=\"excerpt-read-more\" href=\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\" title=\"ReadGemini Embedding 2 : le premier mod\u00e8le d&#8217;int\u00e9gration multimodale de Google\">&#8230; Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":70656,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[29],"tags":[],"class_list":["post-70677","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-non-classifiee"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v21.8 (Yoast SEO v24.2) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Gemini Embedding 2 : le premier mod\u00e8le d&#039;int\u00e9gration multimodale de Google<\/title>\n<meta name=\"description\" content=\"Gemini Embedding 2 de Google permet d&#039;int\u00e9grer du texte, des 
images, des vid\u00e9os, de l&#039;audio et des PDF dans un espace vectoriel. Nous analysons les points de r\u00e9f\u00e9rence, les prix et ce que cela signifie pour votre pipeline RAG.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Gemini Embedding 2 : le premier mod\u00e8le d&#039;int\u00e9gration multimodale de Google\" \/>\n<meta property=\"og:description\" content=\"Gemini Embedding 2 de Google permet d&#039;int\u00e9grer du texte, des images, des vid\u00e9os, de l&#039;audio et des PDF dans un espace vectoriel. Nous analysons les points de r\u00e9f\u00e9rence, les prix et ce que cela signifie pour votre pipeline RAG.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\" \/>\n<meta property=\"og:site_name\" content=\"Cody - The AI Trained on Your Business\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-24T03:02:17+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-26T18:08:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Om Kamath\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@meetcodyai\" \/>\n<meta name=\"twitter:site\" content=\"@meetcodyai\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" 
\/>\n\t<meta name=\"twitter:data1\" content=\"Om Kamath\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\"},\"author\":{\"name\":\"Om Kamath\",\"@id\":\"https:\/\/meetcody.ai\/#\/schema\/person\/cde65ec55b79cd833a9777d0a62e83c8\"},\"headline\":\"Gemini Embedding 2 : le premier mod\u00e8le d&#8217;int\u00e9gration multimodale de Google\",\"datePublished\":\"2026-03-24T03:02:17+00:00\",\"dateModified\":\"2026-03-26T18:08:01+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\"},\"wordCount\":1484,\"publisher\":{\"@id\":\"https:\/\/meetcody.ai\/#organization\"},\"image\":{\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg\",\"articleSection\":[\"Non classifi\u00e9(e)\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\",\"url\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\",\"name\":\"Gemini Embedding 2 : le premier mod\u00e8le d'int\u00e9gration multimodale de 
Google\",\"isPartOf\":{\"@id\":\"https:\/\/meetcody.ai\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg\",\"datePublished\":\"2026-03-24T03:02:17+00:00\",\"dateModified\":\"2026-03-26T18:08:01+00:00\",\"description\":\"Gemini Embedding 2 de Google permet d'int\u00e9grer du texte, des images, des vid\u00e9os, de l'audio et des PDF dans un espace vectoriel. Nous analysons les points de r\u00e9f\u00e9rence, les prix et ce que cela signifie pour votre pipeline RAG.\",\"breadcrumb\":{\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage\",\"url\":\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg\",\"contentUrl\":\"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg\",\"width\":2560,\"height\":1440},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/meetcody.ai\/fr\/home-v2\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Gemini Embedding 2 : le premier mod\u00e8le d&#8217;int\u00e9gration 
multimodale de Google\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/meetcody.ai\/#website\",\"url\":\"https:\/\/meetcody.ai\/\",\"name\":\"Cody AI - The AI Trained on Your Business\",\"description\":\"AI Powered Knowledge Base for Employees\",\"publisher\":{\"@id\":\"https:\/\/meetcody.ai\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/meetcody.ai\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/meetcody.ai\/#organization\",\"name\":\"Cody AI - The AI Trained on Your Business\",\"url\":\"https:\/\/meetcody.ai\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/meetcody.ai\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/meetcody.ai\/wp-content\/uploads\/2023\/05\/logo-codyai.svg\",\"contentUrl\":\"https:\/\/meetcody.ai\/wp-content\/uploads\/2023\/05\/logo-codyai.svg\",\"width\":\"1024\",\"height\":\"1024\",\"caption\":\"Cody AI - The AI Trained on Your Business\"},\"image\":{\"@id\":\"https:\/\/meetcody.ai\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/meetcodyai\",\"https:\/\/discord.com\/invite\/jXEVDcFxqs\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/meetcody.ai\/#\/schema\/person\/cde65ec55b79cd833a9777d0a62e83c8\",\"name\":\"Om Kamath\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/meetcody.ai\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/3903c678cd7f6c8df0a843ae177998f5d413954afa3062f984a030a889a97849?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/3903c678cd7f6c8df0a843ae177998f5d413954afa3062f984a030a889a97849?s=96&d=mm&r=g\",\"caption\":\"Om Kamath\"},\"description\":\"Om 
Kamath\",\"sameAs\":[\"http:\/\/meetcody.ai\"],\"url\":\"https:\/\/meetcody.ai\/fr\/blog\/author\/omkamath\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Gemini Embedding 2 : le premier mod\u00e8le d'int\u00e9gration multimodale de Google","description":"Gemini Embedding 2 de Google permet d'int\u00e9grer du texte, des images, des vid\u00e9os, de l'audio et des PDF dans un espace vectoriel. Nous analysons les points de r\u00e9f\u00e9rence, les prix et ce que cela signifie pour votre pipeline RAG.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/","og_locale":"fr_FR","og_type":"article","og_title":"Gemini Embedding 2 : le premier mod\u00e8le d'int\u00e9gration multimodale de Google","og_description":"Gemini Embedding 2 de Google permet d'int\u00e9grer du texte, des images, des vid\u00e9os, de l'audio et des PDF dans un espace vectoriel. Nous analysons les points de r\u00e9f\u00e9rence, les prix et ce que cela signifie pour votre pipeline RAG.","og_url":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/","og_site_name":"Cody - The AI Trained on Your Business","article_published_time":"2026-03-24T03:02:17+00:00","article_modified_time":"2026-03-26T18:08:01+00:00","og_image":[{"width":2560,"height":1440,"url":"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg","type":"image\/jpeg"}],"author":"Om Kamath","twitter_card":"summary_large_image","twitter_creator":"@meetcodyai","twitter_site":"@meetcodyai","twitter_misc":{"Written by":"Om Kamath","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#article","isPartOf":{"@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/"},"author":{"name":"Om Kamath","@id":"https:\/\/meetcody.ai\/#\/schema\/person\/cde65ec55b79cd833a9777d0a62e83c8"},"headline":"Gemini Embedding 2 : le premier mod\u00e8le d&#8217;int\u00e9gration multimodale de Google","datePublished":"2026-03-24T03:02:17+00:00","dateModified":"2026-03-26T18:08:01+00:00","mainEntityOfPage":{"@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/"},"wordCount":1484,"publisher":{"@id":"https:\/\/meetcody.ai\/#organization"},"image":{"@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage"},"thumbnailUrl":"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg","articleSection":["Non classifi\u00e9(e)"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/","url":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/","name":"Gemini Embedding 2 : le premier mod\u00e8le d'int\u00e9gration multimodale de 
Google","isPartOf":{"@id":"https:\/\/meetcody.ai\/#website"},"primaryImageOfPage":{"@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage"},"image":{"@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage"},"thumbnailUrl":"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg","datePublished":"2026-03-24T03:02:17+00:00","dateModified":"2026-03-26T18:08:01+00:00","description":"Gemini Embedding 2 de Google permet d'int\u00e9grer du texte, des images, des vid\u00e9os, de l'audio et des PDF dans un espace vectoriel. Nous analysons les points de r\u00e9f\u00e9rence, les prix et ce que cela signifie pour votre pipeline RAG.","breadcrumb":{"@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#primaryimage","url":"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg","contentUrl":"https:\/\/meetcody.ai\/wp-content\/uploads\/2026\/03\/embedding-cover-scaled.jpg","width":2560,"height":1440},{"@type":"BreadcrumbList","@id":"https:\/\/meetcody.ai\/fr\/blog\/gemini-embedding-2-le-premier-modele-dintegration-multimodale-de-google\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/meetcody.ai\/fr\/home-v2\/"},{"@type":"ListItem","position":2,"name":"Gemini Embedding 2 : le premier mod\u00e8le d&#8217;int\u00e9gration multimodale de 
Google"}]},{"@type":"WebSite","@id":"https:\/\/meetcody.ai\/#website","url":"https:\/\/meetcody.ai\/","name":"Cody AI - The AI Trained on Your Business","description":"AI Powered Knowledge Base for Employees","publisher":{"@id":"https:\/\/meetcody.ai\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/meetcody.ai\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/meetcody.ai\/#organization","name":"Cody AI - The AI Trained on Your Business","url":"https:\/\/meetcody.ai\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/meetcody.ai\/#\/schema\/logo\/image\/","url":"https:\/\/meetcody.ai\/wp-content\/uploads\/2023\/05\/logo-codyai.svg","contentUrl":"https:\/\/meetcody.ai\/wp-content\/uploads\/2023\/05\/logo-codyai.svg","width":"1024","height":"1024","caption":"Cody AI - The AI Trained on Your Business"},"image":{"@id":"https:\/\/meetcody.ai\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/meetcodyai","https:\/\/discord.com\/invite\/jXEVDcFxqs"]},{"@type":"Person","@id":"https:\/\/meetcody.ai\/#\/schema\/person\/cde65ec55b79cd833a9777d0a62e83c8","name":"Om Kamath","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/meetcody.ai\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/3903c678cd7f6c8df0a843ae177998f5d413954afa3062f984a030a889a97849?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/3903c678cd7f6c8df0a843ae177998f5d413954afa3062f984a030a889a97849?s=96&d=mm&r=g","caption":"Om Kamath"},"description":"Om 
Kamath","sameAs":["http:\/\/meetcody.ai"],"url":"https:\/\/meetcody.ai\/fr\/blog\/author\/omkamath\/"}]}},"_links":{"self":[{"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/posts\/70677","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/comments?post=70677"}],"version-history":[{"count":2,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/posts\/70677\/revisions"}],"predecessor-version":[{"id":70707,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/posts\/70677\/revisions\/70707"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/media\/70656"}],"wp:attachment":[{"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/media?parent=70677"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/categories?post=70677"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/meetcody.ai\/fr\/wp-json\/wp\/v2\/tags?post=70677"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}