
[RFC] Introduce Ollama Wrapper for easing local development with Symfony AI #635

@chr-hertel

Description

Symfony AI also ships the AI Platform Component, whose goal is to provide an abstraction layer over inference providers and different models, so that you can, for example, switch from Llama running on Azure to Llama running with Ollama.

Ollama is a very popular and widely adopted tool that enables developers to run models locally with a streamlined API, and the AI Platform aims to support this - even if that support needs to be extended.
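
For context, running a model locally with Ollama today looks roughly like this (a minimal sketch based on the Ollama README; the model name and prompt are only examples):

```shell
# download a model locally
ollama pull llama3.2

# chat with it interactively
ollama run llama3.2

# or call the local HTTP API that the AI Platform talks to
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'
```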

The idea would be to provide a wrapper in the Symfony CLI that eases the installation and handling of Ollama while working on a Symfony project using AI - similar to the existing Composer and PHP wrappers.

This could look like:

```shell
symfony ollama run llama3.2
```

See https://github.com/ollama/ollama/blob/main/README.md#cli-reference
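
A hypothetical sketch of how the wrapper could mirror Ollama's own CLI verbs (only `symfony ollama run` is proposed above; the other subcommands are assumptions based on the CLI reference linked above):

```shell
# pull a model ahead of time
symfony ollama pull llama3.2

# start an interactive session with a model
symfony ollama run llama3.2

# list locally available models
symfony ollama list

# start the Ollama server for the project
symfony ollama serve
```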
