# System Requirements

Hardware and software requirements for developing and deploying the Emblema platform.
## Overview

Emblema is a complex AI platform that combines document processing, language models, vector databases, and web interfaces. Requirements vary by deployment type and the features in use.
## Hardware Requirements

### Local Development (Minimum)

- CPU: 4 cores (Intel i5 or AMD Ryzen 5 equivalent)
- RAM: 16 GB
- Storage: 50 GB of free SSD space
- Network: stable internet connection for pulling Docker images
### Local Development (Recommended)

- CPU: 8 cores (Intel i7/i9 or AMD Ryzen 7/9)
- RAM: 32 GB
- Storage: 100 GB NVMe SSD
- GPU: NVIDIA RTX 3060 or better (for local AI)
- Network: broadband (>50 Mbps)
### Production (Minimum)

- CPU: 16 cores
- RAM: 64 GB
- Storage: 500 GB NVMe SSD
- Network: 1 Gbps
- Load balancer: SSL/TLS support
### Production with On-Premise AI (Minimum Requirements)

- CPU: 32+ cores
- RAM: 128+ GB
- Storage: 2+ TB NVMe SSD
- GPU: 2x NVIDIA RTX ADA 6000 (96 GB total VRAM)
- Network: 10 Gbps
- Backup: automated backup system
- OS: Ubuntu 22.04 LTS (the ONLY supported OS)
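As a quick sanity check, a Linux host can be compared against these minimums with standard tools; a sketch (thresholds are the on-premise AI figures above, so adjust them for other tiers):

```shell
# Compare this host against the on-premise AI minimums (32 cores, 128 GB RAM, 2 TB disk).
# Uses only standard Linux utilities (nproc, free, df).
CORES=$(nproc)
RAM_GB=$(free -g | awk '/^Mem:/ {print $2}')
DISK_AVAIL=$(df -BG --output=avail / | tail -1 | tr -d ' G')

echo "Cores: $CORES (need >= 32)"
echo "RAM:   ${RAM_GB} GB (need >= 128)"
echo "Disk:  ${DISK_AVAIL} GB available on / (need >= 2000)"
```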
### GPU Considerations for AI

```bash
# Check GPU compatibility
nvidia-smi

# MANDATORY NVIDIA requirements for production:
# - GPU: 2x NVIDIA RTX ADA 6000 (48 GB VRAM each)
# - Total VRAM: 96 GB (absolute minimum)
# - Driver: >=525.60.13 (for ADA 6000)
# - CUDA: >=12.0 (ADA architecture support)
# - Docker: nvidia-container-toolkit >=1.14.0
# - OS: Ubuntu 22.04 LTS ONLY
```
Supported GPUs for production:

- NVIDIA RTX ADA 6000: 48 GB VRAM x 2 = 96 GB (mandatory minimum)
- NVIDIA RTX ADA 8000: 48 GB VRAM x 2 = 96 GB (equivalent alternative)
- NVIDIA H100: 80 GB VRAM x 2 = 160 GB (higher-end configuration)

Supported GPUs for development:

- NVIDIA RTX 4090: 24 GB VRAM (limited testing only)
- NVIDIA RTX 3080/3090: 10-24 GB VRAM (single-component development only)
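One way to verify the 96 GB VRAM floor is to sum the per-GPU memory that `nvidia-smi` reports. A sketch, run here against sample CSV output so it works without a GPU (the two 49152 MiB values are illustrative, not measured):

```shell
# Sum per-GPU VRAM (MiB). The sample data stands in for the output of:
#   nvidia-smi --query-gpu=name,memory.total --format=csv,noheader,nounits
SAMPLE="NVIDIA RTX ADA 6000, 49152
NVIDIA RTX ADA 6000, 49152"
TOTAL=$(echo "$SAMPLE" | awk -F', ' '{sum += $2} END {print sum}')
echo "Total VRAM: ${TOTAL} MiB"
# 96 GB = 98304 MiB, the production minimum
[ "$TOTAL" -ge 98304 ] && echo "Meets the 96 GB production minimum"
```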
## Base Software

### Operating System
**Limited OS support.** PRODUCTION: only Ubuntu 22.04 LTS is supported for on-premise deployments with the full AI stack.
#### Ubuntu 22.04 LTS (MANDATORY for Production)

- Ubuntu: 22.04 LTS, the ONLY supported OS for on-premise AI
- Kernel: ≥5.15.0 (ships with Ubuntu 22.04)
- Architecture: x86_64 required
- NVIDIA driver: ≥525.60.13 for ADA 6000
- CUDA Toolkit: ≥12.0
#### Development (Limited)

- Ubuntu: 20.04 LTS, 22.04 LTS, for component development
- macOS: 11.0+, frontend development only
- Windows: 10/11 Pro with WSL2, frontend development only

**Development limitations.** On macOS and Windows, only frontend components can be developed. The full AI stack requires Ubuntu 22.04 with ADA 6000 GPUs.
### Containerization

Docker (required):

- Docker Engine: ≥20.10
- Docker Compose: ≥2.20 (with `include` syntax)
- Docker Desktop: latest (Windows/Mac)

```bash
# Check versions
docker --version
docker compose version

# NVIDIA Container Toolkit (for GPU support).
# Note: apt-key is deprecated on recent Ubuntu releases; this is the legacy repo setup.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
### Development Tools

Node.js and package managers:

- Node.js: ≥20.11.0 LTS
- pnpm: ≥8.10.0
- npm: ≥10.0.0 (fallback)

Python (for backend services):

- Python: ≥3.10, <3.12
- uv: latest (package manager)
- pip: ≥23.0 (fallback)

Git:

- Git: ≥2.30
- GitHub CLI: latest (optional but recommended)

```bash
# Check installed versions
node --version
pnpm --version
python --version
uv --version
git --version
gh --version
```
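Minimum-version comparisons like these can be scripted with `sort -V` (GNU version ordering); a small sketch:

```shell
# version_ok MIN HAVE: succeeds when HAVE >= MIN under version ordering.
# If MIN sorts first (or they are equal), HAVE satisfies the minimum.
version_ok() {
  min="$1"; have="$2"
  [ "$(printf '%s\n' "$min" "$have" | sort -V | head -n1)" = "$min" ]
}

version_ok "20.11.0" "20.12.1" && echo "20.12.1 ok"
version_ok "2.30" "2.25" || echo "2.25 too old"
```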
## Specific Dependencies

### Frontend Development
```json
{
  "engines": {
    "node": ">=20.11.0",
    "pnpm": ">=8.10.0"
  },
  "devDependencies": {
    "turbo": "^1.x.x",
    "typescript": "^5.x.x",
    "@types/node": "^20.x.x",
    "eslint": "^8.x.x",
    "prettier": "^3.x.x"
  }
}
```
Recommended VS Code extensions:

- TypeScript and JavaScript Language Features
- ES7+ React/Redux/React-Native snippets
- Tailwind CSS IntelliSense
- ESLint
- Prettier
- Auto Rename Tag
- Bracket Pair Colorizer (deprecated; recent VS Code versions colorize bracket pairs natively)
### Backend Development (Python)

```toml
# Minimal pyproject.toml for each service
[project]
name = "example-service"  # placeholder; use the real service name
version = "0.1.0"         # placeholder
requires-python = ">=3.10,<3.12"
dependencies = [
    "fastapi>=0.104.0",
    "uvicorn>=0.24.0",
    "pydantic>=2.4.0",
    "sqlalchemy>=2.0.0",
    "celery>=5.3.0",
    "redis>=5.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.uv]
dev-dependencies = [
    "pytest>=7.4.0",
    "httpx>=0.25.0",
    "pytest-asyncio>=0.21.0",
]
```
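To confirm the local interpreter falls inside the `requires-python` window above, a one-liner may help (a sketch; it checks whatever `python3` is on PATH):

```shell
# Check the running interpreter against ">=3.10,<3.12"
python3 -c 'import sys; ok = (3, 10) <= sys.version_info[:2] < (3, 12); print("supported" if ok else "unsupported", sys.version.split()[0])'
```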
### AI/ML Dependencies

```bash
# System packages (Ubuntu/Debian)
sudo apt-get update && sudo apt-get install -y \
  build-essential \
  cmake \
  libgl1-mesa-glx \
  libglib2.0-0 \
  libsm6 \
  libxext6 \
  libxrender-dev \
  libgomp1 \
  libmagic1 \
  poppler-utils \
  libreoffice \
  pandoc \
  tesseract-ocr \
  tesseract-ocr-ita \
  ffmpeg \
  imagemagick

# Python AI/ML packages (version specifiers are quoted so the shell
# does not treat ">=" as an output redirection)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install "transformers>=4.35.0"
pip install "sentence-transformers>=2.2.2"
pip install "FlagEmbedding>=1.2.0"  # BGE-M3
pip install "whisperx>=3.1.0"
pip install "pymilvus>=2.3.0"
```
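After installing the system packages, it may help to confirm that the document-processing CLIs they provide are actually on PATH. A sketch (the tool list mirrors the apt packages above; note that `poppler-utils` provides `pdftotext`, `libreoffice` provides `soffice`, and `imagemagick` provides `convert`):

```shell
# Report which document-processing tools are missing
MISSING=0
for tool in pdftotext soffice pandoc tesseract ffmpeg convert; do
  if command -v "$tool" > /dev/null; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    MISSING=$((MISSING + 1))
  fi
done
echo "$MISSING tool(s) missing"
```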
### Database Requirements

```text
# PostgreSQL
Version: ≥15.0
Extensions:
  - uuid-ossp
  - pg_trgm
  - btree_gin
Storage: ≥100GB per environment

# Redis
Version: ≥7.0
Memory: ≥8GB per environment
Persistence: AOF + RDB

# Milvus (vector database)
Version: ≥2.3.0
Storage: ≥200GB per environment
Memory: ≥16GB allocated
```
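As an illustration of the "AOF + RDB" persistence requirement, a `redis.conf` fragment (the values shown are common defaults for illustration, not project-mandated settings):

```conf
# redis.conf: enable both AOF and RDB persistence
appendonly yes
appendfsync everysec
save 900 1
save 300 10
save 60 10000
```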
## Network and Ports

### Default Ports

```text
# Frontend
www-emblema: 3000
doc-emblema: 3001

# Backend services
background-task: 8000
document-render: 8094
mcp-demo: 8095
novu-bridge: 8096

# Infrastructure
postgresql: 5432
redis: 6379
milvus: 19530
minio: 9000, 9001
keycloak: 8180
hasura: 8080
traefik: 80, 443
grafana: 3030
prometheus: 9090
flower: 5555
litellm: 4000
```
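A quick reachability probe for some of these ports can be done with bash's `/dev/tcp`, so no extra tools are needed. A sketch (hosts/ports are illustrative; "not reachable" simply means nothing is listening there from where you run it):

```shell
# Probe a few default ports; swap localhost for the Compose service names
# when running inside the Docker network.
for svc in localhost:5432 localhost:6379 localhost:19530; do
  HOST=${svc%%:*}; PORT=${svc##*:}
  if timeout 1 bash -c ">/dev/tcp/$HOST/$PORT" 2> /dev/null; then
    echo "$HOST:$PORT reachable"
  else
    echo "$HOST:$PORT not reachable"
  fi
done
```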
### Firewall Configuration

```bash
# Ports to open (production)

# HTTP/HTTPS
ufw allow 80/tcp
ufw allow 443/tcp

# SSH (change the default port)
ufw allow 2222/tcp

# Monitoring (restrict by IP)
ufw allow from 10.0.0.0/8 to any port 3030  # Grafana
ufw allow from 10.0.0.0/8 to any port 9090  # Prometheus

# Databases (internal network only):
# do not expose PostgreSQL, Redis, or Milvus publicly
```
## Cloud Requirements

### Docker Image Storage

```text
# Required registry space
Total images: ~15-20 GB (all services)
Base images: ~8 GB (Node.js, Python, CUDA)
Application images: ~10-12 GB

# Bandwidth per full deployment
Download: ~20 GB (first pull)
Incremental: ~500 MB - 2 GB (updates)
```
### Resource Scaling Guidelines

**Minimum GPU requirements.** Any on-premise deployment REQUIRES at least 2x NVIDIA RTX ADA 6000 (96 GB total VRAM).

| Concurrent Users | CPU Cores | RAM (GB) | Storage (GB) | Minimum GPU Setup |
|---|---|---|---|---|
| 1-50 | 32 | 128 | 2000 | 2x ADA 6000 (96GB) |
| 50-200 | 64 | 256 | 5000 | 2x ADA 6000 (96GB) |
| 200-500 | 128 | 512 | 10000 | 4x ADA 6000 (192GB) |
| 500-1000 | 256 | 1024 | 20000 | 6x ADA 6000 (288GB) |
| 1000+ | 512+ | 2048+ | 50000+ | 8x ADA 6000+ (384GB+) |
## Testing the Requirements

### Base Environment Check
```bash
#!/bin/bash
# scripts/check-requirements.sh

echo "Checking system requirements..."

# Check Docker
if command -v docker &> /dev/null; then
    echo "✅ Docker: $(docker --version)"
else
    echo "❌ Docker not found"
    exit 1
fi

# Check Docker Compose
if docker compose version &> /dev/null; then
    echo "✅ Docker Compose: $(docker compose version --short)"
else
    echo "❌ Docker Compose v2 syntax not supported"
    exit 1
fi

# Check Node.js
if command -v node &> /dev/null; then
    NODE_VERSION=$(node --version | cut -d'v' -f2)
    if [ "$(printf '%s\n' "20.11.0" "$NODE_VERSION" | sort -V | head -n1)" = "20.11.0" ]; then
        echo "✅ Node.js: v$NODE_VERSION"
    else
        echo "⚠️ Node.js: v$NODE_VERSION (< 20.11.0 recommended)"
    fi
else
    echo "❌ Node.js not found"
fi

# Check pnpm
if command -v pnpm &> /dev/null; then
    echo "✅ pnpm: $(pnpm --version)"
else
    echo "❌ pnpm not found"
fi

# Check Python
if command -v python3 &> /dev/null; then
    PYTHON_VERSION=$(python3 --version | cut -d' ' -f2)
    echo "✅ Python: $PYTHON_VERSION"
else
    echo "❌ Python not found"
fi

# Check uv
if command -v uv &> /dev/null; then
    echo "✅ uv: $(uv --version)"
else
    echo "⚠️ uv not found (recommended for Python)"
fi

# Check GPU (MANDATORY for production)
if command -v nvidia-smi &> /dev/null; then
    echo "✅ NVIDIA GPUs available:"
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader,nounits

    # Count ADA 6000 cards (grep -c already prints 0 on no match,
    # so no fallback echo is needed)
    ADA_COUNT=$(nvidia-smi --query-gpu=name --format=csv,noheader,nounits | grep -c "ADA 6000" || true)
    TOTAL_VRAM=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | awk '{sum += $1} END {print sum}')

    echo "ADA 6000 GPUs detected: $ADA_COUNT"
    echo "Total VRAM: ${TOTAL_VRAM}MB"

    if [ "$ADA_COUNT" -ge 2 ] && [ "$TOTAL_VRAM" -ge 98304 ]; then  # 96GB = 98304MB
        echo "✅ Production GPU requirements MET"
    else
        echo "❌ Production GPU requirements NOT met"
        echo "   Required: 2x NVIDIA RTX ADA 6000 (96GB total VRAM)"
    fi
else
    echo "❌ No NVIDIA GPU detected - MANDATORY for on-premise AI"
fi

# Check available disk space
AVAILABLE_SPACE=$(df -h . | awk 'NR==2 {print $4}')
echo "Available disk space: $AVAILABLE_SPACE"

# Check memory
TOTAL_MEM=$(free -h | awk 'NR==2{print $2}')
echo "Total RAM: $TOTAL_MEM"

echo "✅ Check complete!"
```
### Performance Test

```bash
#!/bin/bash
# scripts/performance-test.sh

echo "Running basic performance tests..."

# CPU benchmark
echo "CPU test (30 seconds):"
timeout 30s yes > /dev/null &
CPULOAD_PID=$!
sleep 2
TOP_CPU=$(top -bn1 | grep "yes" | awk '{print $9}')
kill $CPULOAD_PID 2>/dev/null
echo "  CPU usage: ~${TOP_CPU}%"

# Memory test
echo "Memory test:"
AVAILABLE_MEM=$(free -m | awk 'NR==2{printf "%.1f", $7/1024}')
echo "  Available RAM: ${AVAILABLE_MEM}GB"

# Disk I/O test (match either MB/s or GB/s in dd's summary line)
echo "Disk I/O test:"
WRITE_SPEED=$(dd if=/dev/zero of=/tmp/speedtest bs=1M count=100 2>&1 | grep -oP '[\d.]+ [MG]B/s' | tail -1)
rm -f /tmp/speedtest
echo "  Write speed: $WRITE_SPEED"

# Docker test
echo "Docker startup test:"
time docker run --rm hello-world > /dev/null 2>&1
echo "  Docker startup: OK"

echo "✅ Performance test complete!"
```
## Troubleshooting

| Problem | Symptoms | Fix |
|---|---|---|
| Out of memory | Services crash, OOMKilled | Add RAM or reduce Celery workers |
| Docker disk space | "No space left on device" | `docker system prune -a --volumes` |
| GPU memory | CUDA out of memory | Reduce batch size or chunk size |
| Port conflicts | "Address already in use" | Check ports in `.env`, kill conflicting processes |
| Slow embeddings | Task timeouts | Check GPU utilization, raise worker timeout |
| Database locks | Query timeouts | Check active connections, restart PostgreSQL |
| Network issues | Services unreachable | Check Docker networks, restart the stack |
### Diagnostic Commands

```bash
# System resources
htop
free -h
df -h
nvidia-smi  # if a GPU is available

# Docker diagnostics
docker system df
docker stats
docker compose ps
docker compose logs -f

# Network debugging
docker network ls
netstat -tulpn | grep LISTEN
ss -tulpn | grep LISTEN

# Database connectivity
docker compose exec postgres psql -U postgres -d emblema -c "SELECT version();"
docker compose exec redis redis-cli ping
```