
System Requirements

Hardware and software requirements for developing and deploying the Emblema platform.

📋 Overview

Emblema is a complex AI platform that combines document processing, language models, vector databases, and web interfaces. Requirements vary with the deployment type and the features in use.

🖥️ Hardware Requirements

Local Development (Minimum)

CPU: 4 cores (Intel i5 or AMD Ryzen 5 equivalent)
RAM: 16 GB
Storage: 50 GB of free SSD space
Network: stable internet connection for pulling Docker images

Local Development (Recommended)

CPU: 8 cores (Intel i7/i9 or AMD Ryzen 7/9)
RAM: 32 GB
Storage: 100 GB NVMe SSD
GPU: NVIDIA RTX 3060 or better (for local AI)
Network: broadband (>50 Mbps)

Production (Minimum)

CPU: 16 cores
RAM: 64 GB
Storage: 500 GB NVMe SSD
Network: 1 Gbps
Load balancer: SSL/TLS support
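As a quick sanity check, the production minimums above can be compared against the host with standard Linux tools; a minimal sketch (the thresholds mirror this list — adjust them for your target tier):

```shell
#!/usr/bin/env bash
# Compare detected resources against the production minimums listed above.
MIN_CORES=16
MIN_RAM_GB=64

CORES=$(nproc)
RAM_GB=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)

echo "CPU cores: $CORES (minimum: $MIN_CORES)"
echo "RAM: ${RAM_GB} GB (minimum: $MIN_RAM_GB)"

[ "$CORES" -ge "$MIN_CORES" ] || echo "WARN: below the production CPU minimum"
[ "$RAM_GB" -ge "$MIN_RAM_GB" ] || echo "WARN: below the production RAM minimum"
```

Storage and network throughput are not covered here; `df -h` and a bandwidth test cover those separately.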

Production with On-Premise AI (Minimum Requirements)

CPU: 32+ cores
RAM: 128+ GB
Storage: 2+ TB NVMe SSD
GPU: 2x NVIDIA RTX ADA 6000 (96GB total VRAM)
Network: 10 Gbps
Backup: automated backup system
OS: Ubuntu 22.04 LTS (the ONLY supported OS)

GPU Considerations for AI

# Check GPU compatibility
nvidia-smi

# MANDATORY NVIDIA requirements for production:
# - GPU: 2x NVIDIA RTX ADA 6000 (48GB VRAM each)
# - Total VRAM: 96GB (absolute minimum)
# - Driver: ≥525.60.13 (for ADA 6000)
# - CUDA: ≥12.0 (for ADA architecture support)
# - Docker: nvidia-container-toolkit ≥1.14.0
# - OS: Ubuntu 22.04 LTS ONLY

Supported GPUs for production:

  • NVIDIA RTX ADA 6000: 48GB VRAM x 2 = 96GB - MANDATORY MINIMUM
  • NVIDIA RTX ADA 8000: 48GB VRAM x 2 = 96GB - Equivalent alternative
  • NVIDIA H100: 80GB VRAM x 2 = 160GB - Higher-end configuration

Supported GPUs for development:

  • NVIDIA RTX 4090: 24GB VRAM - For limited testing
  • NVIDIA RTX 3080/3090: 10-24GB VRAM - For developing single components only

💿 Base Software

Operating System

Limited OS Support

PRODUCTION: Only Ubuntu 22.04 LTS is supported for on-premise deployments with the full AI stack.

Ubuntu 22.04 LTS (MANDATORY for Production)

  • Ubuntu: 22.04 LTS - the ONLY supported OS for on-premise AI
  • Kernel: ≥5.15.0 (ships with Ubuntu 22.04)
  • Architecture: x86_64 required
  • NVIDIA driver: ≥525.60.13 for ADA 6000
  • CUDA Toolkit: ≥12.0

Development (Limited)

  • Ubuntu: 20.04 LTS, 22.04 LTS - For component development
  • macOS: 11.0+ - Frontend development only
  • Windows: 10/11 Pro with WSL2 - Frontend development only

Development Limitations

On macOS and Windows only frontend components can be developed. The full AI stack requires Ubuntu 22.04 with ADA 6000 GPUs.
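A host can be checked against the OS matrix above via /etc/os-release; a minimal sketch:

```shell
#!/usr/bin/env bash
# Report the host OS and kernel, and flag whether it matches the
# production requirement (Ubuntu 22.04 LTS).
. /etc/os-release
echo "OS: $NAME $VERSION_ID"
echo "Kernel: $(uname -r)"

if [ "$ID" = "ubuntu" ] && [ "$VERSION_ID" = "22.04" ]; then
  echo "OK: supported production OS"
else
  echo "NOTE: the full AI stack requires Ubuntu 22.04 LTS"
fi
```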

Containerization

# Docker (required)
Docker Engine: ≥20.10
Docker Compose: ≥2.20 (with `include` syntax support)
Docker Desktop: latest (Windows/Mac)

# Check versions
docker --version
docker compose version

# NVIDIA Container Toolkit (for GPU access; note that apt-key is
# deprecated on recent Ubuntu releases - see NVIDIA's docs for the
# keyring-based setup)
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

# Verify that containers can access the GPU (image tag is an example)
docker run --rm --gpus all nvidia/cuda:12.0.1-base-ubuntu22.04 nvidia-smi

Development Tools

# Node.js and package manager
Node.js: ≥20.11.0 LTS
pnpm: ≥8.10.0
npm: ≥10.0.0 (fallback)

# Check installation
node --version
pnpm --version

# Python (for backend services)
Python: ≥3.10, <3.12
uv: latest (package manager)
pip: ≥23.0 (fallback)

# Check Python installation
python --version
uv --version

# Git
Git: ≥2.30
GitHub CLI: latest (optional but recommended)

# Check Git
git --version
gh --version
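The minimum versions above can be checked without extra tooling by leaning on GNU `sort -V`; `ver_ge` below is a hypothetical helper, not part of the Emblema scripts:

```shell
# ver_ge A B - succeeds when version A >= version B (uses sort -V).
ver_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: compare an installed Node.js version against the 20.11.0 minimum.
ver_ge "20.12.1" "20.11.0" && echo "Node.js version OK"
ver_ge "18.19.0" "20.11.0" || echo "Node.js too old"
```

The same helper works for any of the tools in this list, e.g. `ver_ge "$(git --version | awk '{print $3}')" 2.30`.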

🔧 Specific Dependencies

Frontend Development

{
  "engines": {
    "node": ">=20.11.0",
    "pnpm": ">=8.10.0"
  },
  "devDependencies": {
    "turbo": "^1.x.x",
    "typescript": "^5.x.x",
    "@types/node": "^20.x.x",
    "eslint": "^8.x.x",
    "prettier": "^3.x.x"
  }
}

Recommended VS Code extensions:

  • TypeScript and JavaScript Language Features
  • ES7+ React/Redux/React-Native snippets
  • Tailwind CSS IntelliSense
  • ESLint
  • Prettier
  • Auto Rename Tag
  • Bracket Pair Colorizer

Backend Development (Python)

# Minimal pyproject.toml for each service
[project]
requires-python = ">=3.10,<3.12"
dependencies = [
    "fastapi>=0.104.0",
    "uvicorn>=0.24.0",
    "pydantic>=2.4.0",
    "sqlalchemy>=2.0.0",
    "celery>=5.3.0",
    "redis>=5.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.uv]
dev-dependencies = [
    "pytest>=7.4.0",
    "httpx>=0.25.0",
    "pytest-asyncio>=0.21.0",
]

AI/ML Dependencies

# System packages (Ubuntu/Debian)
sudo apt-get update && sudo apt-get install -y \
    build-essential \
    cmake \
    libgl1-mesa-glx \
    libglib2.0-0 \
    libsm6 \
    libxext6 \
    libxrender-dev \
    libgomp1 \
    libmagic1 \
    poppler-utils \
    libreoffice \
    pandoc \
    tesseract-ocr \
    tesseract-ocr-ita \
    ffmpeg \
    imagemagick

# Python AI/ML packages (quote the specifiers so the shell
# does not treat >= as a redirect)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install "transformers>=4.35.0"
pip install "sentence-transformers>=2.2.2"
pip install "FlagEmbedding>=1.2.0"  # BGE-M3
pip install "whisperx>=3.1.0"
pip install "pymilvus>=2.3.0"

Database Requirements

# PostgreSQL
Version: ≥15.0
Extensions:
  - uuid-ossp
  - pg_trgm
  - btree_gin
Storage: ≥100GB per environment

# Redis
Version: ≥7.0
Memory: ≥8GB per environment
Persistence: AOF + RDB

# Milvus (vector database)
Version: ≥2.3.0
Storage: ≥200GB per environment
Memory: ≥16GB allocated
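The PostgreSQL extensions listed above must be enabled in each database; a minimal DDL sketch (run as a superuser against the application database — `emblema` is the name used elsewhere on this page):

```sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS btree_gin;
```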

🌐 Network and Ports

Default Ports

# Frontend
www-emblema: 3000
doc-emblema: 3001

# Backend Services
background-task: 8000
document-render: 8094
mcp-demo: 8095
novu-bridge: 8096

# Infrastructure
postgresql: 5432
redis: 6379
milvus: 19530
minio: 9000, 9001
keycloak: 8180
hasura: 8080
traefik: 80, 443
grafana: 3030
prometheus: 9090
flower: 5555
litellm: 4000
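Before bringing the stack up, it can help to verify that none of the defaults above are already bound; a minimal sketch using `ss` (`check_ports` is a hypothetical helper, and the port list is only a subset):

```shell
# Print, for a subset of the default ports above, whether anything
# is already listening on them.
check_ports() {
  for PORT in 3000 3001 8000 5432 6379 19530; do
    if ss -tln 2>/dev/null | grep -q ":$PORT "; then
      echo "WARN: port $PORT is already in use"
    else
      echo "OK: port $PORT is free"
    fi
  done
}

check_ports
```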

Firewall Configuration

# Ports to open (production)
# HTTP/HTTPS
ufw allow 80/tcp
ufw allow 443/tcp

# SSH (change the default port)
ufw allow 2222/tcp

# Monitoring (restrict by source IP)
ufw allow from 10.0.0.0/8 to any port 3030  # Grafana
ufw allow from 10.0.0.0/8 to any port 9090  # Prometheus

# Databases (internal network only)
# Never expose PostgreSQL, Redis, or Milvus publicly

☁️ Cloud Requirements

Docker Images Storage

# Registry space required
Total images: ~15-20 GB (all services)
Base images: ~8 GB (Node.js, Python, CUDA)
Application images: ~10-12 GB

# Bandwidth for a full deployment
Download: ~20 GB (first deployment)
Incremental: ~500 MB - 2 GB (updates)

Resource Scaling Guidelines

Minimum GPU Requirements

Any on-premise deployment REQUIRES at least 2x NVIDIA RTX ADA 6000 (96GB total VRAM).

| Concurrent Users | CPU Cores | RAM (GB) | Storage (GB) | Minimum GPU Setup |
|---|---|---|---|---|
| 1-50 | 32 | 128 | 2000 | 2x ADA 6000 (96GB) |
| 50-200 | 64 | 256 | 5000 | 2x ADA 6000 (96GB) |
| 200-500 | 128 | 512 | 10000 | 4x ADA 6000 (192GB) |
| 500-1000 | 256 | 1024 | 20000 | 6x ADA 6000 (288GB) |
| 1000+ | 512+ | 2048+ | 50000+ | 8x ADA 6000+ (384GB+) |
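The sizing table can be read as a simple lookup; `gpu_tier` below is an illustrative helper (not part of the platform tooling) that maps expected concurrent users to the minimum GPU setup:

```shell
# Map expected concurrent users to the minimum GPU setup from the
# scaling table above.
gpu_tier() {
  local users=$1
  if   [ "$users" -le 200 ];  then echo "2x ADA 6000 (96GB)"
  elif [ "$users" -le 500 ];  then echo "4x ADA 6000 (192GB)"
  elif [ "$users" -le 1000 ]; then echo "6x ADA 6000 (288GB)"
  else                             echo "8x ADA 6000+ (384GB+)"
  fi
}

gpu_tier 120   # -> 2x ADA 6000 (96GB)
gpu_tier 700   # -> 6x ADA 6000 (288GB)
```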

🧪 Requirements Testing

Basic Environment Check

#!/bin/bash
# scripts/check-requirements.sh

echo "🔍 Checking system requirements..."

# Check Docker
if command -v docker &> /dev/null; then
    echo "✅ Docker: $(docker --version)"
else
    echo "❌ Docker not found"
    exit 1
fi

# Check Docker Compose
if docker compose version &> /dev/null; then
    echo "✅ Docker Compose: $(docker compose version --short)"
else
    echo "❌ Docker Compose v2 syntax not supported"
    exit 1
fi

# Check Node.js
if command -v node &> /dev/null; then
    NODE_VERSION=$(node --version | cut -d'v' -f2)
    if [ "$(printf '%s\n' "20.11.0" "$NODE_VERSION" | sort -V | head -n1)" = "20.11.0" ]; then
        echo "✅ Node.js: v$NODE_VERSION"
    else
        echo "⚠️ Node.js: v$NODE_VERSION (< 20.11.0 recommended)"
    fi
else
    echo "❌ Node.js not found"
fi

# Check pnpm
if command -v pnpm &> /dev/null; then
    echo "✅ pnpm: $(pnpm --version)"
else
    echo "❌ pnpm not found"
fi

# Check Python
if command -v python3 &> /dev/null; then
    PYTHON_VERSION=$(python3 --version | cut -d' ' -f2)
    echo "✅ Python: $PYTHON_VERSION"
else
    echo "❌ Python not found"
fi

# Check uv
if command -v uv &> /dev/null; then
    echo "✅ uv: $(uv --version)"
else
    echo "⚠️ uv not found (recommended for Python)"
fi

# Check GPU (MANDATORY for production)
if command -v nvidia-smi &> /dev/null; then
    echo "✅ NVIDIA GPUs available:"
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader,nounits

    # Count ADA 6000 cards (grep -c already prints 0 on no match)
    ADA_COUNT=$(nvidia-smi --query-gpu=name --format=csv,noheader,nounits | grep -c "ADA 6000")
    TOTAL_VRAM=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | awk '{sum += $1} END {print sum}')

    echo "ADA 6000 GPUs detected: $ADA_COUNT"
    echo "Total VRAM: ${TOTAL_VRAM}MB"

    if [ "$ADA_COUNT" -ge 2 ] && [ "$TOTAL_VRAM" -ge 98304 ]; then  # 96GB = 98304MB
        echo "✅ Production GPU requirements SATISFIED"
    else
        echo "❌ Production GPU requirements NOT satisfied"
        echo "   Required: 2x NVIDIA RTX ADA 6000 (96GB total VRAM)"
    fi
else
    echo "❌ No NVIDIA GPU detected - MANDATORY for on-premise AI"
fi

# Check available disk space
AVAILABLE_SPACE=$(df -h . | awk 'NR==2 {print $4}')
echo "💾 Available disk space: $AVAILABLE_SPACE"

# Check memory
TOTAL_MEM=$(free -h | awk 'NR==2{print $2}')
echo "🧠 Total RAM: $TOTAL_MEM"

echo "✅ Check complete!"

Performance Test

#!/bin/bash
# scripts/performance-test.sh

echo "⚡ Running basic performance tests..."

# CPU benchmark
echo "🔥 CPU test (30 seconds):"
timeout 30s yes > /dev/null &
CPULOAD_PID=$!
sleep 2
TOP_CPU=$(top -bn1 | grep "yes" | awk '{print $9}')
kill $CPULOAD_PID 2>/dev/null
echo "  CPU usage under load: ~${TOP_CPU}%"

# Memory test
echo "🧠 Memory test:"
AVAILABLE_MEM=$(free -m | awk 'NR==2{printf "%.1f", $7/1024}')
echo "  Available RAM: ${AVAILABLE_MEM}GB"

# Disk I/O test
echo "💿 Disk I/O test:"
WRITE_SPEED=$(dd if=/dev/zero of=/tmp/speedtest bs=1M count=100 2>&1 | grep -oP '\d+\.?\d* MB/s' | tail -1)
rm -f /tmp/speedtest
echo "  Write speed: $WRITE_SPEED"

# Docker test
echo "🐳 Docker performance test:"
time docker run --rm hello-world > /dev/null 2>&1
echo "  Docker startup: OK"

echo "✅ Performance test complete!"

πŸ› Troubleshooting​

| Problem | Symptoms | Fix |
|---|---|---|
| Out of memory | Services crash, OOMKilled | Increase RAM or reduce Celery workers |
| Docker space | "No space left on device" | docker system prune -a --volumes |
| GPU memory | CUDA out of memory | Reduce batch size or chunk size |
| Port conflicts | "Address already in use" | Check ports in .env, kill conflicting processes |
| Slow embeddings | Task timeouts | Check GPU utilization, raise worker timeouts |
| Database locks | Query timeouts | Check active connections, restart PostgreSQL |
| Network issues | Services unreachable | Check the Docker network, restart the stack |

Diagnostic Commands

# Check system resources
htop
free -h
df -h
nvidia-smi  # if a GPU is available

# Docker diagnostics
docker system df
docker stats
docker compose ps
docker compose logs -f

# Network debugging
docker network ls
netstat -tulpn | grep LISTEN
ss -tulpn | grep LISTEN

# Database connectivity
docker compose exec postgres psql -U postgres -d emblema -c "SELECT version();"
docker compose exec redis redis-cli ping
