From 81f235d278a4a5c12757b2680a87d040de7a2c00 Mon Sep 17 00:00:00 2001
From: Kevin Jung
Date: Thu, 9 Nov 2023 00:44:28 +0000
Subject: [PATCH] Fixed llava server doc arguments

---
 docs/server.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/server.md b/docs/server.md
index 030c591bf..6a6988b69 100644
--- a/docs/server.md
+++ b/docs/server.md
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal
@@ -88,4 +88,4 @@ response = client.chat.completions.create(
     ],
 )
 print(response)
-```
\ No newline at end of file
+```