Issues: ollama/ollama
msg="llama runner terminated" error="exit status 2"
bug
Something isn't working
#9330
opened Feb 25, 2025 by
suiyuejinghao2024
MultiModality [500] Internal Server Error - {"error":"POST predict: Post \"http://127.0.0.1:54956/completion\": EOF"} · bug · #9329, opened Feb 25, 2025 by OnceCrazyer
Windows portable mode? · feature request · #9328, opened Feb 25, 2025 by OfficiallyCrazy
How to Enable Flash Attention in Ollama Docker Deployment? · feature request · #9327, opened Feb 25, 2025 by lixiangge
ollama create can't detect Modelfile · bug · #9326, opened Feb 25, 2025 by ConnorTippets
Embedding failed after some requests · bug · #9325, opened Feb 25, 2025 by QichangZheng
automatically set context length to model context length · feature request · #9323, opened Feb 24, 2025 by ParthSareen
Ubuntu 24.10, 128 GB RAM, 2x RTX A4000: ollama run deepseek-r1:70b-llama-distill-fp16 crashes desktop env. · bug · #9322, opened Feb 24, 2025 by aloeppert
tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host · bug · #9320, opened Feb 24, 2025 by mohitnandaniya-devloper
⚡ Ollama parallel configuration tweaks for more workloads on the same server · feature request · #9319, opened Feb 24, 2025 by Fade78
Unsupported JetPack version detected. GPU may not be supported · bug · #9317, opened Feb 24, 2025 by fedekrum
How to fix this "failed to fix semaphore" error? Is it related to CONCURRENCY_TASK_LIMIt · bug · #9316, opened Feb 24, 2025 by Jaykumaran
Feature Request 👍 WeChat discussion group · feature request · #9314, opened Feb 24, 2025 by godrobin1
llama3.2-vision really slow when already in VRAM - high load duration · bug · #9311, opened Feb 24, 2025 by ribbles
8 GPUs: want to start 8 instances of the same model · feature request · #9310, opened Feb 24, 2025 by AltenLi
Need ollama to support loading multiple gguf files for the same model · feature request · #9309, opened Feb 24, 2025 by MusicOfWind
simplescaling/s1 by Stanford · model request · #9308, opened Feb 24, 2025 by nileshtrivedi
Don't let politics pollute the community: request removal of r1-1776 · model request · #9307, opened Feb 24, 2025 by ttimasdf
ollama --version always returns 0.0.0 · bug · #9303, opened Feb 23, 2025 by undici77
Ollama does not work with Linux kernel 5.15.173 using AMD GPU (ROCm error) · bug · #9302, opened Feb 23, 2025 by rkrisztian
Add response to logs · feature request · #9301, opened Feb 23, 2025 by The-LittleTeapot
Ollama Model download problem (restarting) on Windows · bug · #9300, opened Feb 23, 2025 by SudoMds