Proxmox/Ollama: llm_benchmark


I found an LLM benchmarking tool: llm_benchmark (installed via pip).

I'm in last place on https://llm.aidatatools.com/results-linux.php, with "llama3.1:8b" at "1.12".
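For the record, the install and a run boil down to two commands; a minimal sketch, assuming the PyPI package is named llm-benchmark (it provides the llm_benchmark entry point):

pip install llm-benchmark   # package name assumed; the CLI command is llm_benchmark
llm_benchmark run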

 llm_benchmark run
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%', 
'gpu_temperature': '60.0°C'}
Only one GPU card
Total memory size : 61.36 GB
cpu_info: Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz
gpu_info: Quadro 4000
os_version: Ubuntu 22.04.5 LTS
ollama_version: 0.5.7
----------
LLM models file path:/usr/local/lib/python3.10/dist-packages/llm_benchmark/data/benchmark_models_16gb_ram.yml
Checking and pulling the following LLM models
phi4:14b
qwen2:7b
gemma2:9b
mistral:7b
llama3.1:8b
llava:7b
llava:13b
----------
model_name =    mistral:7b
prompt = Write a step-by-step guide on how to bake a chocolate cake from scratch.
eval rate:            1.51 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game
eval rate:            1.30 tokens/s
prompt = Create a dialogue between two characters that discusses economic crisis
eval rate:            1.52 tokens/s
prompt = In a forest, there are brave lions living there. Please continue the story.
eval rate:            1.44 tokens/s
prompt = I'd like to book a flight for 4 to Seattle in U.S.
eval rate:            1.25 tokens/s
--------------------
Average of eval rate:  1.404  tokens/s
----------------------------------------
model_name =    llama3.1:8b
prompt = Write a step-by-step guide on how to bake a chocolate cake from scratch.
eval rate:            1.19 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game
eval rate:            1.20 tokens/s
prompt = Create a dialogue between two characters that discusses economic crisis
eval rate:            1.02 tokens/s
prompt = In a forest, there are brave lions living there. Please continue the story.
eval rate:            1.22 tokens/s
prompt = I'd like to book a flight for 4 to Seattle in U.S.
eval rate:            0.95 tokens/s
--------------------
Average of eval rate:  1.116  tokens/s
----------------------------------------
model_name =    phi4:14b
prompt = Write a step-by-step guide on how to bake a chocolate cake from scratch.
eval rate:            0.76 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game
eval rate:            0.76 tokens/s
prompt = Create a dialogue between two characters that discusses economic crisis
eval rate:            0.76 tokens/s
prompt = In a forest, there are brave lions living there. Please continue the story.
eval rate:            0.72 tokens/s
prompt = I'd like to book a flight for 4 to Seattle in U.S.
eval rate:            0.80 tokens/s
--------------------
Average of eval rate:  0.76  tokens/s
----------------------------------------
model_name =    qwen2:7b
prompt = Write a step-by-step guide on how to bake a chocolate cake from scratch.
eval rate:            1.41 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game
eval rate:            1.38 tokens/s
prompt = Create a dialogue between two characters that discusses economic crisis
eval rate:            1.31 tokens/s
prompt = In a forest, there are brave lions living there. Please continue the story.
eval rate:            1.18 tokens/s
prompt = I'd like to book a flight for 4 to Seattle in U.S.
eval rate:            1.29 tokens/s
--------------------
Average of eval rate:  1.314  tokens/s
----------------------------------------
model_name =    gemma2:9b
prompt = Explain Artificial Intelligence and give its applications.
eval rate:            1.01 tokens/s
prompt = How are machine learning and AI related?
eval rate:            1.40 tokens/s
prompt = What is Deep Learning based on?
eval rate:            0.89 tokens/s
prompt = What is the full form of LSTM?
eval rate:            0.86 tokens/s
prompt = What are different components of GAN?
eval rate:            1.00 tokens/s
--------------------
Average of eval rate:  1.032  tokens/s
----------------------------------------
model_name =    llava:7b
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample1.jpg
eval rate:            0.86 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample2.jpg
eval rate:            2.44 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample3.jpg
eval rate:            1.53 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample4.jpg
eval rate:            1.80 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample5.jpg
eval rate:            2.56 tokens/s
--------------------
Average of eval rate:  1.838  tokens/s
----------------------------------------
model_name =    llava:13b
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample1.jpg
eval rate:            0.67 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample2.jpg
eval rate:            0.67 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample3.jpg
eval rate:            0.77 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample4.jpg
eval rate:            1.10 tokens/s
prompt = Describe the image, /usr/local/lib/python3.10/dist-packages/llm_benchmark/data/img/sample5.jpg
eval rate:            0.45 tokens/s
--------------------
Average of eval rate:  0.732  tokens/s
----------------------------------------
Sending the following data to a remote server
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
 'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%', 
'gpu_temperature': '61.0°C'}
Only one GPU card
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
 'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%',
 'gpu_temperature': '61.0°C'}
Only one GPU card
{
    "mistral:7b": "1.40",
    "llama3.1:8b": "1.12",
    "phi4:14b": "0.76",
    "qwen2:7b": "1.31",
    "gemma2:9b": "1.03",
    "llava:7b": "1.84",
    "llava:13b": "0.73",
    "uuid": "",
    "ollama_version": "0.5.7"
}
----------
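Note that the tool uploads results to a remote server by default; as I understand it, the project documents a flag to keep everything local:

llm_benchmark run --no-sendinfo   # flag name per the upstream docs; skips the upload step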

Proxmox: Installing Ollama as an LXC container

A quick test of installing Ollama as an LXC container via a script:

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/ollama.sh)"

We'll see how it goes… for the moment my NVIDIA card (or BIOS) doesn't support Proxmox passthrough.

root@balkany:~# dmesg | grep -e DMAR -e IOMMU | grep "enable"
[    0.333769] DMAR: IOMMU enabled

root@balkany:~# dmesg | grep 'remapping'
[    0.821036] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.821038] x2apic: IRQ remapping doesn't support X2APIC mode

# lspci -nn | grep 'NVIDIA'
0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF100GL [Quadro 4000] [10de:06dd] (rev a3)
0a:00.1 Audio device [0403]: NVIDIA Corporation GF100 High Definition Audio Controller [10de:0be5] (rev a1)

# cat /etc/default/grub | grep "GRUB_CMDLINE_LINUX_DEFAULT"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off initcall_blacklist=sysfb_init"

# efibootmgr -v
EFI variables are not supported on this system.

# cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# cat /etc/modprobe.d/pve-blacklist.conf | grep nvidia
blacklist nvidiafb
blacklist nvidia

So I added this:

# cat  /etc/modprobe.d/iommu_unsafe_interrupts.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
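For a modprobe.d change like this to take effect, the initramfs usually has to be rebuilt before rebooting (standard Debian/Proxmox procedure, not specific to this box):

update-initramfs -u -k all   # rebuild initramfs so the new module option is picked up
reboot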

I do have a single IOMMU group for the NVIDIA card.
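One common way to verify this is to list every PCI device together with its IOMMU group:

for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}   # extract the group number
  printf 'IOMMU group %s: ' "$g"
  lspci -nns "${d##*/}"                          # device sitting at that PCI address
done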

When I run the script, it ends with an error:

 
  ____  ____
  / __ \/ / /___ _____ ___  ____ _
 / / / / / / __ `/ __ `__ \/ __ `/
/ /_/ / / / /_/ / / / / / / /_/ /
\____/_/_/\__,_/_/ /_/ /_/\__,_/

Using Default Settings
Using Distribution: ubuntu
Using ubuntu Version: 22.04
Using Container Type: 1
Using Root Password: Automatic Login
Using Container ID: 114
Using Hostname: ollama
Using Disk Size: 24GB
Allocated Cores 4
Allocated Ram 4096
Using Bridge: vmbr0
Using Static IP Address: dhcp
Using Gateway IP Address: Default
Using Apt-Cacher IP Address: Default
Disable IPv6: No
Using Interface MTU Size: Default
Using DNS Search Domain: Host
Using DNS Server Address: Host
Using MAC Address: Default
Using VLAN Tag: Default
Enable Root SSH Access: No
Enable Verbose Mode: No
Creating a Ollama LXC using the above default settings
 ✓ Using datastore2 for Template Storage.
 ✓ Using datastore2 for Container Storage.
 ✓ Updated LXC Template List
 ✓ LXC Container 114 was successfully created.
 ✓ Started LXC Container
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
 //bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
 ✓ Set up Container OS
 ✓ Network Connected: 192.168.1.45 
 ✓ IPv4 Internet Connected
 ✗ IPv6 Internet Not Connected
 ✓ DNS Resolved github.com to 140.82.121.3
 ✓ Updated Container OS
 ✓ Installed Dependencies
 ✓ Installed Golang
 ✓ Set up Intel® Repositories
 ✓ Set Up Hardware Acceleration
 ✓ Installed Intel® oneAPI Base Toolkit
 / Installing Ollama (Patience)   
[ERROR] in line 23: exit code 0: while executing command "$@" > /dev/null 2>&1
The silent function has suppressed the error, run the script with verbose mode enabled, which will provide more detailed output.

What a pain.

And Ollama isn't using the card's GPU… what a pain.


My OS version is "Ubuntu 22.04.5 LTS".

My NVIDIA card/driver versions:

# uname -a
Linux 5.15.0-130-generic #140-Ubuntu SMP Wed Dec 18 17:59:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

# nvidia-smi -L
GPU 0: Quadro 4000 (UUID: GPU-13797e5d-a72f-4c72-609f-686fa4a8c956)

# nvidia-smi 
Mon Jan 20 16:41:52 2025       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.157                Driver Version: 390.157                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro 4000         Off  | 00000000:00:10.0 Off |                  N/A |
| 36%   62C   P12    N/A /  N/A |      1MiB /  1985MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  390.157  Wed Oct 12 09:19:07 UTC 2022
GCC version:  gcc version 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)

# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Oct_29_23:50:19_PDT_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0

# ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:10.0 ==
modalias : pci:v000010DEd000006DDsv000010DEsd00000780bc03sc00i00
vendor   : NVIDIA Corporation
model    : GF100GL [Quadro 4000]
driver   : nvidia-driver-390 - distro non-free recommended
driver   : xserver-xorg-video-nouveau - distro free builtin


Ollama's journal (note that the KV cache and compute buffers all land on the CPU, i.e. nothing is offloaded to the GPU):

# journalctl -u ollama -f --no-pager
ollama[2974]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
ollama[2974]: llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
ollama[2974]: llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
ollama[2974]: llama_new_context_with_model:        CPU  output buffer size =     0.56 MiB
ollama[2974]: llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
ollama[2974]: llama_new_context_with_model: graph nodes  = 1030
ollama[2974]: llama_new_context_with_model: graph splits = 1
ollama[2974]: time=2025-01-20T16:30:12.070Z level=INFO source=server.go:594 msg="llama runner started in 4.28 seconds"

When I run the installer, I do get "NVIDIA GPU installed":

# curl -fsSL https://ollama.com/install.sh | sh
>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100,0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.

But I have a loading problem: libcuda.so from the 390.157 driver doesn't export the cuDeviceGetUuid symbol that Ollama's CUDA runtime looks up, so it falls back to the CPU:

ollama[3917]: time=2025-01-20T16:52:30.680Z level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
ollama[3917]: time=2025-01-20T16:52:30.681Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
ollama[3917]: time=2025-01-20T16:52:30.681Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
ollama[3917]: time=2025-01-20T16:52:30.702Z level=INFO source=gpu.go:630 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.390.157: symbol lookup for cuDeviceGetUuid failed: /usr/lib/x86_64-linux-gnu/libcuda.so.390.157: undefined symbol: cuDeviceGetUuid"
ollama[3917]: time=2025-01-20T16:52:30.741Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
ollama[3917]: time=2025-01-20T16:52:30.742Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="61.4 GiB" available="59.4 GiB"
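A quick way to confirm where inference actually runs, once a model is loaded, is ollama ps; as I understand it, its PROCESSOR column reports the CPU/GPU split:

ollama ps   # PROCESSOR shows e.g. "100% CPU" when nothing is offloaded to the GPU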

To be continued…

Proxmox: Resize disk on Ubuntu 22


Going from 98 GB to 392 GB, without any problems.

The first step is done through the Proxmox GUI; the commands below then have to be run inside Ubuntu.
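The GUI step (Hardware > Hard Disk > Resize) can also be done from the Proxmox host shell; a sketch, assuming VM ID 100 and a scsi0 disk, both to be adjusted to your VM:

qm resize 100 scsi0 +300G   # VM ID and disk name are assumptions; grows the virtual disk by 300G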

fdisk -l

Disk /dev/sda: 400 GiB, 429496729600 bytes, 838860800 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0588156E-1871-4A3D-900F-4C8C2758E02E

Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 4198399 4194304 2G Linux filesystem
/dev/sda3 4198400 838860766 834662367 398G Linux filesystem

Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

df -h

Filesystem Size Used Avail Use% Mounted on
tmpfs 6,2G 1,2M 6,2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 98G 89G 4,2G 96% /
tmpfs 31G 4,0K 31G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock

sudo pvdisplay

--- Physical volume ---
PV Name /dev/sda3
VG Name ubuntu-vg
PV Size <398,00 GiB / not usable 16,50 KiB
Allocatable yes
PE Size 4,00 MiB
Total PE 101887
Free PE 76287
Allocated PE 25600
PV UUID kJRjOE-1iPT-CVJQ-7QyB-c8I2-ndQQ-Uzi9VE

pvresize /dev/sda3

Physical volume "/dev/sda3" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

Size of logical volume ubuntu-vg/ubuntu-lv changed from 100,00 GiB (25600 extents) to <398,00 GiB (101887 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.

sudo lvdisplay

--- Logical volume ---
LV Path /dev/ubuntu-vg/ubuntu-lv
LV Name ubuntu-lv
VG Name ubuntu-vg
LV UUID l8Obv4-PXVy-VEsm-db9B-5yZ8-Ybmi-010JO9
LV Write Access read/write
LV Creation host, time ubuntu-server, 2024-02-08 09:41:27 +0000
LV Status available
# open 1
LV Size <398,00 GiB
Current LE 101887
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

sudo resize2fs /dev/ubuntu-vg/ubuntu-lv

resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 50
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 104332288 (4k) blocks long.

df -h

Filesystem Size Used Avail Use% Mounted on
tmpfs 6,2G 1,2M 6,2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 392G 89G 286G 24% /
tmpfs 31G 4,0K 31G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
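For reference, once the disk has been grown on the Proxmox side, the whole LVM chain above condenses to three commands (device and LV paths as in this setup):

sudo pvresize /dev/sda3                                # grow the physical volume
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # hand all free extents to the LV
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv                # grow the ext4 filesystem online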