This downloads the jar files into your Gradle cache, under a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`.
For local inferencing, you must include the ExecuTorch library in your app.
Include the ExecuTorch library by:
1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
2. Move the script to the top level of your Android app where the `app` directory resides.
3. Run `sh download-prebuilt-et-lib.sh` to create an `app/libs` directory and download `executorch.aar` into it. This generates an ExecuTorch library for the XNNPACK delegate.
4. Add the `executorch.aar` dependency in your `build.gradle.kts` file:
```
dependencies {
    ...
}
```
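The exact line inside the `dependencies` block is elided above. As a sketch, assuming the script from step 3 placed the `.aar` in `app/libs`, a file-based dependency in `build.gradle.kts` typically looks like this (the actual line in your project may differ):

```kotlin
dependencies {
    // Hypothetical illustration: load the prebuilt ExecuTorch AAR from app/libs.
    // Gradle resolves the path relative to the app module's directory.
    implementation(files("libs/executorch.aar"))
}
```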
See the Android app [README](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/examples/android_app#quick-start) for the other dependencies required for local RAG.
## Llama Stack APIs in Your Android App
Breaking down the demo app, this section shows the core pieces used to initialize and run inference with Llama Stack using the Kotlin library.
Start a Llama Stack server on localhost. Here is an example of how you can do this: