My OS version is "Ubuntu 22.04.5 LTS".
My NVIDIA card/driver versions:
# uname -a
Linux 5.15.0-130-generic #140-Ubuntu SMP Wed Dec 18 17:59:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

# nvidia-smi -L
GPU 0: Quadro 4000 (UUID: GPU-13797e5d-a72f-4c72-609f-686fa4a8c956)

# nvidia-smi
Mon Jan 20 16:41:52 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.157                Driver Version: 390.157                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro 4000         Off  | 00000000:00:10.0 Off |                  N/A |
| 36%   62C    P12   N/A /  N/A |      1MiB /  1985MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  390.157  Wed Oct 12 09:19:07 UTC 2022
GCC version:  gcc version 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)

# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Oct_29_23:50:19_PDT_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0

# ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:10.0 ==
modalias : pci:v000010DEd000006DDsv000010DEsd00000780bc03sc00i00
vendor   : NVIDIA Corporation
model    : GF100GL [Quadro 4000]
driver   : nvidia-driver-390 - distro non-free recommended
driver   : xserver-xorg-video-nouveau - distro free builtin
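Note the mismatch: nvcc reports CUDA 12.6, but that is only the toolkit; the kernel module and libcuda on this machine still come from the 390.157 driver. As a quick sanity check (the library paths are the ones on my machine and may differ elsewhere), one can list which CUDA libraries the dynamic linker actually resolves:

# ldconfig -p | grep -E 'libcuda|libcudart'
# ls -l /usr/lib/x86_64-linux-gnu/libcuda*

It is the driver-side libcuda, not nvcc, that decides which CUDA API level is available at runtime.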
The ollama log:
# journalctl -u ollama -f --no-pager
ollama[2974]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
ollama[2974]: llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
ollama[2974]: llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
ollama[2974]: llama_new_context_with_model:        CPU  output buffer size =     0.56 MiB
ollama[2974]: llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
ollama[2974]: llama_new_context_with_model: graph nodes  = 1030
ollama[2974]: llama_new_context_with_model: graph splits = 1
ollama[2974]: time=2025-01-20T16:30:12.070Z level=INFO source=server.go:594 msg="llama runner started in 4.28 seconds"
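The CPU KV buffer and CPU compute buffer lines above already hint that nothing is being offloaded to the GPU. With this version of ollama (0.5.x), a quick way to confirm which processor a loaded model ended up on is:

# ollama ps

The PROCESSOR column shows something like "100% GPU" or "100% CPU" for each loaded model.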
When I run the installation, I do get "NVIDIA GPU installed":
# curl -fsSL https://ollama.com/install.sh | sh
>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100,0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.
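The installer does report the GPU, but to see in detail what Ollama's GPU discovery does at startup, its debug logging can be enabled through a systemd override (OLLAMA_DEBUG is the debug switch described in Ollama's troubleshooting documentation):

# systemctl edit ollama.service
    [Service]
    Environment="OLLAMA_DEBUG=1"
# systemctl restart ollama
# journalctl -u ollama -f --no-pager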
But I have a loading problem:
ollama[3917]: time=2025-01-20T16:52:30.680Z level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
ollama[3917]: time=2025-01-20T16:52:30.681Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
ollama[3917]: time=2025-01-20T16:52:30.681Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
ollama[3917]: time=2025-01-20T16:52:30.702Z level=INFO source=gpu.go:630 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.390.157: symbol lookup for cuDeviceGetUuid failed: /usr/lib/x86_64-linux-gnu/libcuda.so.390.157: undefined symbol: cuDeviceGetUuid"
ollama[3917]: time=2025-01-20T16:52:30.741Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
ollama[3917]: time=2025-01-20T16:52:30.742Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="61.4 GiB" available="59.4 GiB"
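If I read this correctly, the problem is not that the GPU is missing but that Ollama's cuda_v11/cuda_v12 runners look up cuDeviceGetUuid in libcuda, a function that only appeared with CUDA 9.2, while the 390 driver series implements CUDA 9.1 at most; the symbol lookup therefore fails and Ollama falls back to the CPU. (And even with a newer driver, the Quadro 4000 is a Fermi card with compute capability 2.0, well below what current CUDA builds target.) The missing symbol can be checked directly on the library reported in the log:

# nm -D /usr/lib/x86_64-linux-gnu/libcuda.so.390.157 | grep cuDeviceGetUuid

No output means the symbol really is not exported by this libcuda, which matches the error above.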
To be continued…