Wanderer: The latest version is ActivityPub-compatible!


The https://wanderer.to/ software that I set up on my Proxmox (https://aventures.cyber-neurones.org/) is now stable (v0.17.1) and ActivityPub-compatible.
The only downside is that the links are no longer the same… but that's no big deal.

So I paid for five more coffees: coff.ee/wanderertrails

To be continued.

Personal IT activities since the start of the year

My IT activities since the start of the year (in general I only write articles about problems):

Proxmox/Ollama: llm_benchmark


I found an LLM benchmarking tool: llm_benchmark (installed via pip).
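For the record, the install is a one-liner (a sketch assuming the PyPI package name llm-benchmark, which provides the llm_benchmark command):

pip install llm-benchmark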

I am in last place at https://llm.aidatatools.com/results-linux.php, with "llama3.1:8b" at "1.12" (tokens per second).

 llm_benchmark run
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%', 
'gpu_temperature': '60.0°C'}
Only one GPU card
Total memory size : 61.36 GB
cpu_info: Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz
gpu_info: Quadro 4000
os_version: Ubuntu 22.04.5 LTS
ollama_version: 0.5.7
----------
LLM models file path:/usr/local/lib/python3.10/dist-packages/llm_benchmark/data/benchmark_models_16gb_ram.yml
Checking and pulling the following LLM models
phi4:14b
qwen2:7b
gemma2:9b
mistral:7b
llama3.1:8b
llava:7b
llava:13b
----------
....
----------------------------------------
Sending the following data to a remote server
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
 'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%', 
'gpu_temperature': '61.0°C'}
Only one GPU card
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
 'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%',
 'gpu_temperature': '61.0°C'}
Only one GPU card
{
    "mistral:7b": "1.40",
    "llama3.1:8b": "1.12",
    "phi4:14b": "0.76",
    "qwen2:7b": "1.31",
    "gemma2:9b": "1.03",
    "llava:7b": "1.84",
    "llava:13b": "0.73",
    "uuid": "",
    "ollama_version": "0.5.7"
}
----------

Proxmox: Installing Ollama as an LXC

A quick test of installing Ollama as an LXC via a script:

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/ollama.sh)"

We'll see how it goes… at the moment my NVIDIA card (or the BIOS) doesn't support Proxmox passthrough.

root@balkany:~# dmesg | grep -e DMAR -e IOMMU | grep "enable"
[    0.333769] DMAR: IOMMU enabled

root@balkany:~# dmesg | grep 'remapping'
[    0.821036] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.821038] x2apic: IRQ remapping doesn't support X2APIC mode

# lspci -nn | grep 'NVIDIA'
0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF100GL [Quadro 4000] [10de:06dd] (rev a3)
0a:00.1 Audio device [0403]: NVIDIA Corporation GF100 High Definition Audio Controller [10de:0be5] (rev a1)
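With those vendor:device IDs in hand, a common companion step for passthrough, given here as a generic sketch rather than something taken from my own config, is to bind both GPU functions to vfio-pci:

# /etc/modprobe.d/vfio.conf (hypothetical file; the two IDs come from the lspci output above)
options vfio-pci ids=10de:06dd,10de:0be5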

# cat /etc/default/grub | grep "GRUB_CMDLINE_LINUX_DEFAULT"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off initcall_blacklist=sysfb_init"
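After any edit to that line, the GRUB config has to be regenerated and the host rebooted; since EFI variables are not supported here (see the efibootmgr output below), this host boots via legacy GRUB and the standard commands apply:

update-grub
reboot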

# efibootmgr -v
EFI variables are not supported on this system.

# cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# cat /etc/modprobe.d/pve-blacklist.conf | grep nvidia
blacklist nvidiafb
blacklist nvidia

So I added this:

# cat  /etc/modprobe.d/iommu_unsafe_interrupts.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
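For a modprobe option like this (or the blacklist above) to take effect at boot, the initramfs has to be rebuilt and the host rebooted; the standard Debian/Proxmox sketch:

update-initramfs -u -k all
reboot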

I do have a single IOMMU group for the NVIDIA card.
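For reference, one generic way to list the IOMMU groups and check that isolation (a sketch, not the exact command from my notes):

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}   # group number from the sysfs path
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"           # device at this PCI address
done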

When I run the script, it ends with an error:

 
  ____  ____
  / __ \/ / /___ _____ ___  ____ _
 / / / / / / __ `/ __ `__ \/ __ `/
/ /_/ / / / /_/ / / / / / / /_/ /
\____/_/_/\__,_/_/ /_/ /_/\__,_/

Using Default Settings
Using Distribution: ubuntu
Using ubuntu Version: 22.04
Using Container Type: 1
Using Root Password: Automatic Login
Using Container ID: 114
Using Hostname: ollama
Using Disk Size: 24GB
Allocated Cores 4
Allocated Ram 4096
Using Bridge: vmbr0
Using Static IP Address: dhcp
Using Gateway IP Address: Default
Using Apt-Cacher IP Address: Default
Disable IPv6: No
Using Interface MTU Size: Default
Using DNS Search Domain: Host
Using DNS Server Address: Host
Using MAC Address: Default
Using VLAN Tag: Default
Enable Root SSH Access: No
Enable Verbose Mode: No
Creating a Ollama LXC using the above default settings
 ✓ Using datastore2 for Template Storage.
 ✓ Using datastore2 for Container Storage.
 ✓ Updated LXC Template List
 ✓ LXC Container 114 was successfully created.
 ✓ Started LXC Container
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
 //bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
 ✓ Set up Container OS
 ✓ Network Connected: 192.168.1.45 
 ✓ IPv4 Internet Connected
 ✗ IPv6 Internet Not Connected
 ✓ DNS Resolved github.com to 140.82.121.3
 ✓ Updated Container OS
 ✓ Installed Dependencies
 ✓ Installed Golang
 ✓ Set up Intel® Repositories
 ✓ Set Up Hardware Acceleration
 ✓ Installed Intel® oneAPI Base Toolkit
 / Installing Ollama (Patience)   
[ERROR] in line 23: exit code 0: while executing command "$@" > /dev/null 2>&1
The silent function has suppressed the error, run the script with verbose mode enabled, which will provide more detailed output.
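Following that suggestion, one option is to rerun the script and enable verbose mode in its Advanced Settings menu; another generic debugging sketch (not the script's documented interface) is to download it and trace it with bash -x:

wget -qO ollama.sh https://github.com/tteck/Proxmox/raw/main/ct/ollama.sh
bash -x ollama.sh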

Misery.