feat: Adds Oracle OCI Tracer #497

Merged (11 commits) on Aug 6, 2025
268 changes: 268 additions & 0 deletions examples/tracing/oci/oci_genai_tracing.ipynb
@@ -0,0 +1,268 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"# Oracle OCI Generative AI Tracing with Openlayer\n",
"\n",
"This notebook demonstrates how to use Openlayer tracing with Oracle Cloud Infrastructure (OCI) Generative AI service.\n",
"\n",
"## Setup\n",
"\n",
"Before running this notebook, ensure you have:\n",
"1. An OCI account with access to Generative AI service\n",
"2. OCI CLI configured or OCI config file set up\n",
"3. An Openlayer account with API key and inference pipeline ID\n",
"4. The required packages installed:\n",
" - `pip install oci`\n",
" - `pip install openlayer`\n",
"\n",
"## Configuration\n",
"\n",
"### Openlayer Setup\n",
"Set these environment variables before running:\n",
"```bash\n",
"export OPENLAYER_API_KEY=\"your-api-key\"\n",
"export OPENLAYER_INFERENCE_PIPELINE_ID=\"your-pipeline-id\"\n",
"```\n",
"\n",
"### OCI Setup\n",
"Make sure your OCI configuration is properly set up. You can either:\n",
"- Use the default OCI config file (`~/.oci/config`)\n",
"- Set up environment variables\n",
"- Use instance principal authentication (when running on OCI compute)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install required packages (uncomment if needed)\n",
"# !pip install oci openlayer\n",
"\n",
"# Set up Openlayer environment variables\n",
"import os\n",
"\n",
"# Configure Openlayer API credentials\n",
"os.environ[\"OPENLAYER_API_KEY\"] = \"your-openlayer-api-key-here\"\n",
"os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"your-inference-pipeline-id-here\"\n",
"\n",
"# NOTE: Remember to set your actual Openlayer API key and inference pipeline ID!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import oci\n",
"from oci.generative_ai_inference import GenerativeAiInferenceClient\n",
"from oci.generative_ai_inference.models import Message, ChatDetails, GenericChatRequest\n",
"\n",
"# Import the Openlayer tracer\n",
"from openlayer.lib.integrations import trace_oci_genai"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"## Initialize OCI Client\n",
"\n",
"Set up the OCI Generative AI client with your configuration.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Configuration - Update these values for your environment\n",
"COMPARTMENT_ID = \"your-compartment-ocid-here\" # Replace with your compartment OCID\n",
"ENDPOINT = \"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\" # Replace with your region's endpoint\n",
"\n",
"# Load OCI configuration\n",
"config = oci.config.from_file() # Uses default config file location\n",
"# Alternatively, you can specify a custom config file:\n",
"# config = oci.config.from_file(\"~/.oci/config\", \"DEFAULT\")\n",
"\n",
"# Create the OCI Generative AI client\n",
"client = GenerativeAiInferenceClient(config=config, service_endpoint=ENDPOINT)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"## Apply Openlayer Tracing\n",
"\n",
"Wrap the OCI client with Openlayer tracing to automatically capture all interactions.\n",
"\n",
"The `trace_oci_genai()` function accepts an optional `estimate_tokens` parameter:\n",
"- `estimate_tokens=True` (default): Estimates token counts when not provided by OCI response\n",
"- `estimate_tokens=False`: Returns None for token fields when not available in the response\n",
"\n",
"OCI responses can be either `CohereChatResponse` or `GenericChatResponse`, both containing usage information when available.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Apply Openlayer tracing to the OCI client\n",
"# With token estimation enabled (default)\n",
"traced_client = trace_oci_genai(client, estimate_tokens=True)\n",
"\n",
"# Alternative: Disable token estimation to get None values when tokens are not available\n",
"# traced_client = trace_oci_genai(client, estimate_tokens=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"## Example 1: Non-Streaming Chat Completion\n",
"\n",
"Simple chat completion without streaming.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a chat request\n",
"chat_request = GenericChatRequest(\n",
" messages=[Message(role=\"user\", content=\"Hello! Can you explain what Oracle Cloud Infrastructure is?\")],\n",
" model_id=\"cohere.command-r-plus\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" is_stream=False, # Non-streaming\n",
")\n",
"\n",
"chat_details = ChatDetails(compartment_id=COMPARTMENT_ID, chat_request=chat_request)\n",
"\n",
"# Make the request - the tracer will automatically capture it\n",
"response = traced_client.chat(chat_details)\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"## Example 2: Streaming Chat Completion\n",
"\n",
"Chat completion with streaming enabled to see tokens as they're generated.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a streaming chat request\n",
"streaming_chat_request = GenericChatRequest(\n",
" messages=[\n",
" Message(role=\"system\", content=\"You are a helpful AI assistant that provides concise, informative answers.\"),\n",
" Message(role=\"user\", content=\"Tell me a short story about cloud computing and AI working together.\"),\n",
" ],\n",
" model_id=\"meta.llama-3.1-70b-instruct\",\n",
" max_tokens=300,\n",
" temperature=0.8,\n",
" is_stream=True, # Enable streaming\n",
")\n",
"\n",
"streaming_chat_details = ChatDetails(compartment_id=COMPARTMENT_ID, chat_request=streaming_chat_request)\n",
"\n",
"# Make the streaming request\n",
"streaming_response = traced_client.chat(streaming_chat_details)\n",
"\n",
"# Process the streaming response\n",
"full_content = \"\"\n",
"for chunk in streaming_response:\n",
" if hasattr(chunk, \"data\") and hasattr(chunk.data, \"choices\"):\n",
" if chunk.data.choices and hasattr(chunk.data.choices[0], \"delta\"):\n",
" delta = chunk.data.choices[0].delta\n",
" if hasattr(delta, \"content\") and delta.content:\n",
" full_content += delta.content\n",
"\n",
"full_content"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"## Example 3: Custom Parameters and Error Handling\n",
"\n",
"Demonstrate various model parameters and how tracing works with different scenarios.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Advanced parameters example\n",
"advanced_request = GenericChatRequest(\n",
" messages=[Message(role=\"user\", content=\"Write a creative haiku about artificial intelligence.\")],\n",
" model_id=\"meta.llama-3.1-70b-instruct\",\n",
" max_tokens=100,\n",
" temperature=0.9, # High creativity\n",
" top_p=0.8,\n",
" frequency_penalty=0.2, # Reduce repetition\n",
" presence_penalty=0.1,\n",
" stop=[\"\\n\\n\"], # Stop at double newline\n",
" is_stream=False,\n",
")\n",
"\n",
"advanced_details = ChatDetails(compartment_id=COMPARTMENT_ID, chat_request=advanced_request)\n",
"\n",
"response = traced_client.chat(advanced_details)\n",
"response"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
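The notebook describes an `estimate_tokens` option: when the OCI response carries no usage information, the tracer either estimates token counts or reports them as unavailable. The sketch below illustrates that fallback in isolation; `resolve_token_count` and the four-characters-per-token heuristic are hypothetical stand-ins, not the actual tracer internals from this PR.

```python
def resolve_token_count(reported_tokens, text, estimate_tokens=True):
    """Return the reported token count, a rough estimate, or None.

    Mirrors the behavior described in the notebook: prefer the count
    reported by OCI; otherwise estimate from text length when
    estimate_tokens=True, and return None when it is False.
    """
    if reported_tokens is not None:
        return reported_tokens  # trust the usage info from the response
    if not estimate_tokens:
        return None  # estimation disabled: token fields stay None
    # Crude heuristic: assume roughly 4 characters per token
    return max(1, len(text) // 4)
```

With this shape, `resolve_token_count(None, "x" * 40)` yields an estimate of 10, while passing `estimate_tokens=False` yields `None`, matching the two modes the notebook documents.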
9 changes: 9 additions & 0 deletions src/openlayer/lib/integrations/__init__.py
@@ -6,12 +6,21 @@
# Optional imports - only import if dependencies are available
try:
from .langchain_callback import OpenlayerHandler

__all__.append("OpenlayerHandler")
except ImportError:
pass

try:
from .openai_agents import OpenlayerTracerProcessor

__all__.extend(["OpenlayerTracerProcessor"])
except ImportError:
pass

try:
from .oci_tracer import trace_oci_genai

__all__.extend(["trace_oci_genai"])
except ImportError:
pass
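The `__init__.py` change follows the module's existing optional-import pattern: each integration is exported only when its third-party dependency (here, the `oci` package) is installed, so importing `openlayer.lib.integrations` never fails on a missing extra. A minimal, self-contained illustration of the same pattern, with `optional_export` and the module names as hypothetical stand-ins:

```python
import importlib


def optional_export(module_name, export_name, registry):
    """Append export_name to registry only if module_name is importable."""
    try:
        importlib.import_module(module_name)
        registry.append(export_name)
    except ImportError:
        pass  # dependency absent: the integration is simply not exported


exports = []
optional_export("json", "trace_json", exports)  # stdlib module: available
optional_export("no_such_pkg_xyz", "trace_missing", exports)  # not installed
# exports now contains only the integration whose dependency resolved
```

The same try/except-ImportError guard in the diff above means `trace_oci_genai` appears in `__all__` only on systems where `oci` is installed.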