Closed
Labels
deprecated: Issues for nexaSDK v1, the version before July 23, 2025
Description
Issue Description
There are a few issues I am facing related to AudioLM, and there are also some discrepancies in the documentation.
- The transcription endpoint is not available after running the local server; instead, the following endpoints were available:
/v1/audio/processing
/v1/audio/processing_stream
However, these two endpoints only work with a pure Whisper model.
- I ran nexa run omniaudio to test out omniaudio, which is an AudioLM as mentioned in the README.md. I then started the server with this model loaded and tried the API endpoint /v1/audiolm/chat/completions.
Response:
{
"detail": "The model that is loaded is not an AudioLM model. Please use an AudioLM model for audio chat completions."
}
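For reference, the request body I sent was of the following shape. This is a minimal sketch: the exact field names are my assumption based on the OpenAI-style chat-completions format, and are not confirmed against the Nexa SDK docs.

```python
import base64
import json

# Placeholder bytes standing in for a real WAV recording.
fake_audio = b"RIFF....WAVE"

# Hypothetical payload for POST /v1/audiolm/chat/completions; the field
# names follow the OpenAI-style chat format and are an assumption.
payload = {
    "model": "omniaudio",
    "messages": [
        {
            "role": "user",
            "content": "Transcribe this clip.",
            "audio": base64.b64encode(fake_audio).decode("ascii"),
        }
    ],
}

# Sending this JSON body to the running server returned the error above.
body = json.dumps(payload)
```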
So I tried to check how the audio is passed to omniaudio via Streamlit by running nexa run omniaudio -st, but I got a permission error on macOS:
Error during audio processing: Error during inference: Error processing audio file: [Errno 1] Operation not permitted: '/Applications/Nexa.app/Contents/Frameworks/tmp'
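As a possible workaround sketch, my assumption is that the failure happens because the tmp directory inside the app bundle is not writable; redirecting temp-file creation to a user-writable location avoids the Errno 1. Whether the Nexa app actually honors TMPDIR is not confirmed.

```python
import os
import tempfile

# Assumption: pointing TMPDIR at a user-writable directory sidesteps the
# read-only path under /Applications/Nexa.app/Contents/Frameworks.
os.environ["TMPDIR"] = tempfile.gettempdir()

# Writing a scratch audio file in the system temp directory succeeds,
# unlike writing inside the app bundle.
with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as f:
    f.write(b"\x00" * 16)
    f.flush()
    scratch_ok = os.path.exists(f.name)
```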
Steps to Reproduce
Simply running the commands above reproduces the problem. The AudioLM features are not working via either the API or Streamlit.
OS
macOS
Python Version
Python 3.12
Nexa SDK Version
0.1.1.0
GPU (if using one)
No response