From bab9c54c9063f79b541550708b0765d6e67f2fa5 Mon Sep 17 00:00:00 2001
From: Lee Stott
Date: Mon, 11 Nov 2024 17:43:26 +0000
Subject: [PATCH] Update

---
 md/06.E2ESamples/E2E_Phi-3-Embedding_Images_with_CLIPVision.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/md/06.E2ESamples/E2E_Phi-3-Embedding_Images_with_CLIPVision.md b/md/06.E2ESamples/E2E_Phi-3-Embedding_Images_with_CLIPVision.md
index b44d63f..da23e1d 100644
--- a/md/06.E2ESamples/E2E_Phi-3-Embedding_Images_with_CLIPVision.md
+++ b/md/06.E2ESamples/E2E_Phi-3-Embedding_Images_with_CLIPVision.md
@@ -360,7 +360,7 @@ This approach allows you to leverage powerful pre-trained models and adapt them
 
 ## Integrating the Phi Family of Models
 
-Integrating the Phi-3 model with the provided code example involving CLIP can indeed be challenging as Dongdong said, especially when considering different vision encoders.
+Integrating the Phi-3 model with the provided code example involving CLIP can indeed be challenging, especially when considering different vision encoders.
 
 Here's a brief overview of how you might approach this:
 