
Testing and Debugging

A guide to implementing testing and debugging in the Emblema ecosystem.

Current Testing Status

The project currently has no tests implemented. This documentation provides guidelines for implementing a complete testing strategy once it becomes a priority.

🧪 Testing Strategy (To Be Implemented)

Proposed Testing Pyramid

      🔺 E2E Tests (Playwright)
    🔸🔸 Integration Tests (API + DB)
  🔹🔹🔹 Unit Tests (Components + Functions)

Recommended Coverage Targets

  • Unit Tests: >80% coverage
  • Integration Tests: Critical user flows
  • E2E Tests: Happy path + edge cases

📋 Testing Implementation Plan

Priority 1: Backend Testing (Python)

  1. Configure pytest for the FastAPI services
  2. Unit tests for chunkers and handlers
  3. API integration tests against a mocked database
  4. Celery task tests with mocks

Priority 2: Frontend Testing (React/Next.js)

  1. Set up Vitest for unit testing
  2. Testing Library for React components
  3. MSW to mock API calls
  4. Playwright for E2E testing

Priority 3: Infrastructure Testing

  1. Test Docker images with container-structure-test
  2. Test deployment scripts
  3. Automated health checks
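The automated health checks in point 3 can start as a small standalone script. Below is a minimal sketch using only the Python standard library; the service names and URLs are assumptions, not the project's actual endpoints:

```python
# health_check.py - minimal automated health check (hypothetical endpoints)
from urllib.request import urlopen
from urllib.error import URLError

# Assumed service endpoints; adjust to the real deployment.
SERVICES = {
    "www-emblema": "http://localhost:3000/api/health",
    "background-task": "http://localhost:8000/health",
}

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        return False

def report(results: dict) -> int:
    """Print one line per service and return a shell-style exit code."""
    for name, healthy in results.items():
        print(f"{'OK  ' if healthy else 'FAIL'} {name}")
    return 0 if all(results.values()) else 1

# Usage: exit_code = report({name: check(url) for name, url in SERVICES.items()})
```

A script like this can be wired into CI or a cron job and fail the pipeline when any service is down.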

🎯 Frontend Testing (Example Implementation)

1. Setup Testing Environment

Test Configuration

// vitest.config.ts
import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";
import path from "path";

export default defineConfig({
  plugins: [react()],
  test: {
    environment: "jsdom",
    globals: true,
    setupFiles: ["./src/test/setup.ts"],
  },
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "./src"),
    },
  },
});

// src/test/setup.ts
import "@testing-library/jest-dom";
import { vi } from "vitest";

// Mock Next.js router
vi.mock("next/navigation", () => ({
  useRouter: () => ({
    push: vi.fn(),
    replace: vi.fn(),
    back: vi.fn(),
  }),
  useSearchParams: () => ({
    get: vi.fn(),
  }),
  usePathname: () => "/test-path",
}));

// Mock next-auth
vi.mock("next-auth/react", () => ({
  useSession: () => ({ data: null, status: "unauthenticated" }),
  signIn: vi.fn(),
  signOut: vi.fn(),
}));

// Mock i18n
vi.mock("@/hooks/use-translation", () => ({
  useTranslation: () => ({
    t: (key: string, options?: any) =>
      options ? `${key}_${JSON.stringify(options)}` : key,
    language: "it",
    changeLanguage: vi.fn(),
  }),
}));

2. Unit Testing Components

Component Testing Pattern

// components/__tests__/document-form.test.tsx
import { render, screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { vi } from 'vitest';
import { DocumentForm } from '../document/form';

// Test wrapper with providers
const TestWrapper = ({ children }: { children: React.ReactNode }) => {
  return (
    <div data-testid="test-wrapper">
      {children}
    </div>
  );
};

const renderDocumentForm = (props = {}) => {
  const defaultProps = {
    defaultValues: {},
    onSubmit: vi.fn(),
    variant: 'page' as const,
  };

  return render(
    <DocumentForm {...defaultProps} {...props} />,
    { wrapper: TestWrapper }
  );
};

describe('DocumentForm', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('renders all form fields', () => {
    renderDocumentForm();

    expect(screen.getByLabelText(/nome documento/i)).toBeInTheDocument();
    expect(screen.getByLabelText(/descrizione/i)).toBeInTheDocument();
    expect(screen.getByRole('button', { name: /salva/i })).toBeInTheDocument();
  });

  it('validates required fields on submit', async () => {
    const onSubmit = vi.fn();
    renderDocumentForm({ onSubmit });

    const submitButton = screen.getByRole('button', { name: /salva/i });
    await userEvent.click(submitButton);

    await waitFor(() => {
      expect(screen.getByText(/nome è richiesto/i)).toBeInTheDocument();
    });

    expect(onSubmit).not.toHaveBeenCalled();
  });

  it('submits form with valid data', async () => {
    const onSubmit = vi.fn();
    renderDocumentForm({ onSubmit });

    const nameInput = screen.getByLabelText(/nome documento/i);
    const descriptionInput = screen.getByLabelText(/descrizione/i);

    await userEvent.type(nameInput, 'Test Document');
    await userEvent.type(descriptionInput, 'Test description');

    const submitButton = screen.getByRole('button', { name: /salva/i });
    await userEvent.click(submitButton);

    await waitFor(() => {
      expect(onSubmit).toHaveBeenCalledWith({
        name: 'Test Document',
        description: 'Test description',
      });
    });
  });

  it('displays error messages from server', async () => {
    const onSubmit = vi.fn().mockRejectedValue(new Error('Server error'));
    renderDocumentForm({ onSubmit });

    // Fill form and submit
    await userEvent.type(
      screen.getByLabelText(/nome documento/i),
      'Test Document'
    );
    await userEvent.click(screen.getByRole('button', { name: /salva/i }));

    await waitFor(() => {
      expect(screen.getByText(/server error/i)).toBeInTheDocument();
    });
  });

  it('resets form when defaultValues change', () => {
    const { rerender } = renderDocumentForm({
      defaultValues: { name: 'Initial Name' }
    });

    expect(screen.getByDisplayValue('Initial Name')).toBeInTheDocument();

    rerender(
      <TestWrapper>
        <DocumentForm
          defaultValues={{ name: 'Updated Name' }}
          onSubmit={vi.fn()}
        />
      </TestWrapper>
    );

    expect(screen.getByDisplayValue('Updated Name')).toBeInTheDocument();
  });
});

3. Hook Testing

// hooks/__tests__/use-documents.test.tsx
import { renderHook, waitFor } from "@testing-library/react";
import { vi } from "vitest";
import { useDocuments } from "../use-documents";

// Mock SWR
vi.mock("swr", () => ({
  default: vi.fn(),
}));

// Mock fetch
const mockFetch = vi.fn();
global.fetch = mockFetch;

describe("useDocuments", () => {
  beforeEach(() => {
    vi.clearAllMocks();
    mockFetch.mockResolvedValue({
      ok: true,
      json: () =>
        Promise.resolve({
          documents: [
            { id: "1", name: "Doc 1", status: "completed" },
            { id: "2", name: "Doc 2", status: "processing" },
          ],
          total: 2,
        }),
    });
  });

  it("fetches documents successfully", async () => {
    const { result } = renderHook(() => useDocuments());

    await waitFor(() => {
      expect(result.current.isLoading).toBe(false);
    });

    expect(result.current.documents).toHaveLength(2);
    expect(result.current.total).toBe(2);
    expect(result.current.error).toBeUndefined();
  });

  it("handles fetch errors", async () => {
    mockFetch.mockRejectedValue(new Error("Network error"));

    const { result } = renderHook(() => useDocuments());

    await waitFor(() => {
      expect(result.current.error).toBeDefined();
    });

    expect(result.current.documents).toEqual([]);
  });

  it("creates new document", async () => {
    const { result } = renderHook(() => useDocuments());

    const newDocumentData = {
      name: "New Document",
      description: "Test description",
    };

    mockFetch.mockResolvedValueOnce({
      ok: true,
      json: () =>
        Promise.resolve({
          id: "3",
          ...newDocumentData,
          status: "processing",
        }),
    });

    const newDocument = await result.current.createDocument(newDocumentData);

    expect(newDocument).toMatchObject({
      id: "3",
      name: "New Document",
      status: "processing",
    });

    expect(mockFetch).toHaveBeenCalledWith("/api/v1/documents", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(newDocumentData),
    });
  });
});

4. Integration Testing

// __tests__/integration/document-upload.test.tsx
import { render, screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { rest } from 'msw';
import { setupServer } from 'msw/node';
import { DocumentUploadPage } from '@/app/documents/upload/page';

// Setup MSW server
const server = setupServer(
  rest.post('/api/v1/documents/upload', (req, res, ctx) => {
    return res(ctx.json({
      id: 'test-doc-id',
      name: 'test-file.pdf',
      status: 'processing',
    }));
  }),

  rest.get('/api/v1/documents/test-doc-id/status', (req, res, ctx) => {
    return res(ctx.json({
      id: 'test-doc-id',
      status: 'completed',
      chunks: 10,
    }));
  })
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

describe('Document Upload Integration', () => {
  it('uploads file and shows processing status', async () => {
    render(<DocumentUploadPage />);

    // Upload file
    const fileInput = screen.getByLabelText(/upload file/i);
    const file = new File(['test content'], 'test.pdf', { type: 'application/pdf' });

    await userEvent.upload(fileInput, file);

    // Click upload
    const uploadButton = screen.getByRole('button', { name: /upload/i });
    await userEvent.click(uploadButton);

    // Check processing status
    await waitFor(() => {
      expect(screen.getByText(/processing/i)).toBeInTheDocument();
    });

    // Wait for completion
    await waitFor(() => {
      expect(screen.getByText(/completed/i)).toBeInTheDocument();
    }, { timeout: 5000 });
  });

  it('handles upload errors gracefully', async () => {
    server.use(
      rest.post('/api/v1/documents/upload', (req, res, ctx) => {
        return res(ctx.status(400), ctx.json({
          error: 'File too large',
        }));
      })
    );

    render(<DocumentUploadPage />);

    const fileInput = screen.getByLabelText(/upload file/i);
    const largeFile = new File(['x'.repeat(10000000)], 'large.pdf');

    await userEvent.upload(fileInput, largeFile);
    await userEvent.click(screen.getByRole('button', { name: /upload/i }));

    await waitFor(() => {
      expect(screen.getByText(/file too large/i)).toBeInTheDocument();
    });
  });
});

🐍 Backend Testing

1. FastAPI Testing Setup

Test Configuration

# conftest.py
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.main import app
from app.dependencies import get_db, get_settings
from app.database import Base

# Test database
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

def override_get_db():
    try:
        db = TestingSessionLocal()
        yield db
    finally:
        db.close()

def override_get_settings():
    from app.config import Settings
    return Settings(
        database_url=SQLALCHEMY_DATABASE_URL,
        redis_url="redis://localhost:6379/1",  # Test Redis DB
    )

# Override dependencies
app.dependency_overrides[get_db] = override_get_db
app.dependency_overrides[get_settings] = override_get_settings

@pytest.fixture(scope="session")
def client():
    Base.metadata.create_all(bind=engine)
    with TestClient(app) as c:
        yield c
    Base.metadata.drop_all(bind=engine)

@pytest.fixture
def db_session():
    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()

@pytest.fixture(autouse=True)
def clean_db():
    """Recreate the schema after each test."""
    yield
    Base.metadata.drop_all(bind=engine)
    Base.metadata.create_all(bind=engine)

2. Unit Testing Services

# tests/test_chunkers.py
import pytest
from app.chunkers.document import DocumentChunker
from app.chunkers.audio import AudioChunker
import tempfile
import os

class TestDocumentChunker:
    """Test document chunking functionality."""

    def setup_method(self):
        self.chunker = DocumentChunker(chunk_size=100, overlap=20)

    def test_chunk_text_content(self):
        """Test basic text chunking."""
        content = "This is a test document. " * 50  # Long text

        with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
            f.write(content)
            temp_path = f.name

        try:
            chunks = self.chunker.chunk_document(temp_path)

            # Assertions
            assert len(chunks) > 1, "Should create multiple chunks"
            assert all(chunk.tokens <= 100 for chunk in chunks), "Chunks should respect token limit"

            # Test overlap
            if len(chunks) > 1:
                first_end_tokens = self.chunker.tokenizer.encode(chunks[0].content)[-10:]
                second_start_tokens = self.chunker.tokenizer.encode(chunks[1].content)[:10]

                # Should have some token overlap
                overlap_count = len(set(first_end_tokens) & set(second_start_tokens))
                assert overlap_count > 0, "Should have token overlap between chunks"

        finally:
            os.unlink(temp_path)

    def test_chunk_metadata(self):
        """Test chunk metadata generation."""
        content = "Test content for metadata validation."

        with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
            f.write(content)
            temp_path = f.name

        try:
            metadata = {"document_id": "test-123", "language": "en"}
            chunks = self.chunker.chunk_document(temp_path, metadata)

            for i, chunk in enumerate(chunks):
                assert chunk.metadata["chunk_index"] == i
                assert chunk.metadata["document_id"] == "test-123"
                assert chunk.metadata["language"] == "en"
                assert chunk.metadata["token_count"] > 0
                assert "source_file" in chunk.metadata

        finally:
            os.unlink(temp_path)

    @pytest.mark.parametrize("chunk_size,overlap", [
        (50, 10),
        (200, 50),
        (512, 75),
    ])
    def test_different_chunk_sizes(self, chunk_size, overlap):
        """Test chunking with different parameters."""
        chunker = DocumentChunker(chunk_size=chunk_size, overlap=overlap)
        content = "This is test content. " * 100

        with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
            f.write(content)
            temp_path = f.name

        try:
            chunks = chunker.chunk_document(temp_path)

            for chunk in chunks:
                assert chunk.tokens <= chunk_size, f"Chunk exceeds size limit: {chunk.tokens} > {chunk_size}"
                assert len(chunk.content.strip()) > 0, "Chunk should not be empty"

        finally:
            os.unlink(temp_path)

class TestAudioChunker:
    """Test audio chunking with mocked WhisperX."""

    def setup_method(self):
        self.chunker = AudioChunker(chunk_size=400, overlap=100)

    @pytest.fixture
    def mock_transcript(self):
        """Mock WhisperX transcript data."""
        return [
            {"speaker": "SPEAKER_00", "start": 0.0, "end": 5.0, "text": "Hello, this is the first segment."},
            {"speaker": "SPEAKER_00", "start": 5.0, "end": 10.0, "text": "Continuing with the same speaker."},
            {"speaker": "SPEAKER_01", "start": 10.0, "end": 15.0, "text": "Now a different speaker is talking."},
            {"speaker": "SPEAKER_01", "start": 15.0, "end": 20.0, "text": "Speaker one continues here."},
            {"speaker": "SPEAKER_00", "start": 20.0, "end": 25.0, "text": "Back to the original speaker."},
        ]

    def test_speaker_based_chunking(self, mock_transcript, monkeypatch):
        """Test chunking based on speaker changes."""
        # `self` is required: the mock replaces an instance method on the class
        def mock_transcribe(self, audio_path, language="it"):
            return mock_transcript

        # Mock the audio handler
        monkeypatch.setattr(
            "app.handlers.audio.AudioHandler.transcribe_with_diarization",
            mock_transcribe
        )

        chunks = self.chunker.chunk_document("fake_audio.mp3", {"language": "en"})

        # Should create 3 chunks (speaker changes)
        assert len(chunks) == 3

        # Check speaker grouping
        assert "SPEAKER_00" in chunks[0].content
        assert "SPEAKER_01" in chunks[1].content
        assert "SPEAKER_00" in chunks[2].content

        # Check timing metadata
        assert chunks[0].metadata["start_time"] == 0.0
        assert chunks[0].metadata["end_time"] == 10.0
        assert chunks[1].metadata["start_time"] == 10.0
        assert chunks[1].metadata["end_time"] == 20.0

3. API Testing

# tests/test_api_documents.py
import pytest
from fastapi.testclient import TestClient
import tempfile
import os

def test_create_document(client: TestClient, db_session):
    """Test document creation endpoint."""
    document_data = {
        "name": "Test Document",
        "description": "Test description",
        "knowledge_base_id": "kb-123"
    }

    response = client.post("/api/v1/documents", json=document_data)

    assert response.status_code == 200
    data = response.json()
    assert data["name"] == "Test Document"
    assert data["status"] == "created"
    assert "id" in data

def test_upload_document(client: TestClient):
    """Test document upload endpoint."""
    # Create test file
    test_content = "This is a test document content for upload testing."

    with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
        f.write(test_content)
        temp_path = f.name

    try:
        with open(temp_path, 'rb') as test_file:
            files = {"file": ("test.txt", test_file, "text/plain")}
            data = {"knowledge_base_id": "kb-123"}

            response = client.post(
                "/api/v1/documents/upload",
                files=files,
                data=data
            )

        assert response.status_code == 200
        result = response.json()
        assert result["name"] == "test.txt"
        assert result["status"] == "processing"
        assert "task_id" in result

    finally:
        os.unlink(temp_path)

def test_upload_invalid_file(client: TestClient):
    """Test upload validation."""
    # Test file too large
    large_content = "x" * (10 * 1024 * 1024 + 1)  # > 10MB

    with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
        f.write(large_content)
        temp_path = f.name

    try:
        with open(temp_path, 'rb') as test_file:
            files = {"file": ("large.txt", test_file, "text/plain")}

            response = client.post("/api/v1/documents/upload", files=files)

        assert response.status_code == 400
        error = response.json()
        assert "too large" in error["detail"].lower()

    finally:
        os.unlink(temp_path)

def test_get_document(client: TestClient, db_session):
    """Test get document endpoint."""
    # First create a document
    document_data = {"name": "Test Doc", "description": "Test"}
    create_response = client.post("/api/v1/documents", json=document_data)
    doc_id = create_response.json()["id"]

    # Get the document
    response = client.get(f"/api/v1/documents/{doc_id}")

    assert response.status_code == 200
    data = response.json()
    assert data["id"] == doc_id
    assert data["name"] == "Test Doc"

def test_get_nonexistent_document(client: TestClient):
    """Test 404 for non-existent document."""
    response = client.get("/api/v1/documents/nonexistent-id")

    assert response.status_code == 404
    assert response.json()["detail"] == "Document not found"

@pytest.mark.parametrize("invalid_data,expected_error", [
    ({"description": "Missing name"}, "name"),
    ({"name": ""}, "name"),
    ({"name": "x" * 300}, "name"),  # Too long
])
def test_document_validation(client: TestClient, invalid_data, expected_error):
    """Test input validation."""
    response = client.post("/api/v1/documents", json=invalid_data)

    assert response.status_code == 422
    errors = response.json()["detail"]
    assert any(expected_error in str(error) for error in errors)

4. Celery Task Testing

# tests/test_tasks.py
import pytest
from unittest.mock import Mock, patch
from app.tasks import process_document, transcribe_audio
from celery.exceptions import Retry

class TestProcessDocumentTask:
    """Test document processing task."""

    @patch('app.handlers.document.DocumentHandler')
    @patch('app.handlers.storage.StorageHandler')
    @patch('app.handlers.vector_db.VectorDBHandler')
    def test_successful_processing(self, mock_vector, mock_storage, mock_doc):
        """Test successful document processing."""
        # Setup mocks
        mock_doc.return_value.optimize_file.return_value = "optimized.pdf"
        mock_doc.return_value.generate_embedding.return_value = [0.1] * 1024

        mock_chunks = [
            Mock(id="chunk1", content="First chunk", metadata={"page": 1}),
            Mock(id="chunk2", content="Second chunk", metadata={"page": 2}),
        ]

        with patch('app.chunkers.get_chunker') as mock_get_chunker:
            mock_chunker = Mock()
            mock_chunker.chunk_document.return_value = mock_chunks
            mock_get_chunker.return_value = mock_chunker

            # Execute task
            result = process_document.apply(args=[
                "doc-123",
                {
                    "chunking_strategy": "recursive",
                    "chunk_size": 512,
                    "chunk_overlap": 75
                }
            ])

            # Assertions
            assert result.successful()
            task_result = result.result
            assert task_result["status"] == "completed"
            assert task_result["document_id"] == "doc-123"
            assert task_result["chunks_processed"] == 2

            # Verify mock calls
            mock_doc.return_value.optimize_file.assert_called_once_with("doc-123")
            mock_vector.return_value.store_embeddings.assert_called_once()

    @patch('app.handlers.document.DocumentHandler')
    def test_processing_failure(self, mock_doc):
        """Test task failure handling."""
        # Setup mock to raise exception
        mock_doc.return_value.optimize_file.side_effect = Exception("Processing failed")

        result = process_document.apply(args=["doc-123", {}])

        assert result.failed()
        assert "Processing failed" in str(result.result)

    @patch('app.handlers.document.DocumentHandler')
    def test_task_retry_on_temporary_failure(self, mock_doc):
        """Test task retry mechanism."""
        # Mock temporary failure
        mock_doc.return_value.optimize_file.side_effect = ConnectionError("Temporary network error")

        # Mock the task to test retry
        with patch.object(process_document, 'retry') as mock_retry:
            mock_retry.side_effect = Retry("Retrying...")

            with pytest.raises(Retry):
                process_document.apply(args=["doc-123", {}])

            mock_retry.assert_called_once()

class TestTranscribeAudioTask:
    """Test audio transcription task."""

    @patch('app.handlers.audio.AudioHandler')
    def test_successful_transcription(self, mock_audio):
        """Test successful audio transcription."""
        # Setup mock transcript
        mock_transcript = [
            {"speaker": "SPEAKER_00", "start": 0.0, "end": 5.0, "text": "Hello"},
            {"speaker": "SPEAKER_01", "start": 5.0, "end": 10.0, "text": "Hi there"},
        ]

        mock_audio.return_value.load_audio.return_value = "audio_data"
        mock_audio.return_value.transcribe_with_diarization.return_value = mock_transcript
        mock_audio.return_value.process_transcript.return_value = mock_transcript
        mock_audio.return_value.get_duration.return_value = 10.0

        # Execute task
        result = transcribe_audio.apply(args=["audio.mp3", "en"])

        # Assertions
        assert result.successful()
        task_result = result.result
        assert task_result["status"] == "completed"
        assert task_result["speakers"] == 2
        assert task_result["duration"] == 10.0

        # Verify audio handler calls
        mock_audio.return_value.transcribe_with_diarization.assert_called_once_with(
            "audio_data", language="en"
        )

🔍 Debugging

1. Frontend Debugging

React DevTools Setup

// Debug component state
const DocumentForm = () => {
  const form = useForm();

  // Debug values in development
  if (process.env.NODE_ENV === 'development') {
    console.log('Form state:', {
      values: form.getValues(),
      errors: form.formState.errors,
      isDirty: form.formState.isDirty,
    });
  }

  return (
    <Form {...form}>
      {/* Form content */}
    </Form>
  );
};

// Add displayName for React DevTools
DocumentForm.displayName = 'DocumentForm';

Network Debugging

// API debugging wrapper
const apiClient = {
  async request(url: string, options: RequestInit = {}) {
    const startTime = performance.now();

    console.log("🌐 API Request:", {
      url,
      method: options.method || "GET",
      headers: options.headers,
      body: options.body,
    });

    try {
      const response = await fetch(url, options);
      const endTime = performance.now();

      console.log("✅ API Response:", {
        url,
        status: response.status,
        statusText: response.statusText,
        duration: `${(endTime - startTime).toFixed(2)}ms`,
      });

      return response;
    } catch (error) {
      const endTime = performance.now();

      console.error("❌ API Error:", {
        url,
        error: error instanceof Error ? error.message : String(error),
        duration: `${(endTime - startTime).toFixed(2)}ms`,
      });

      throw error;
    }
  },
};

2. Backend Debugging

Rich Console Debugging

# app/debug.py
import logging
from rich.console import Console
from rich.logging import RichHandler
from rich.traceback import install

# Install rich traceback handler
install(show_locals=True)

# Setup rich console
console = Console()

# Configure logging with Rich
logging.basicConfig(
    level=logging.DEBUG,
    format="%(message)s",
    datefmt="[%X]",
    handlers=[RichHandler(rich_tracebacks=True)]
)

def debug_log(message: str, data: dict | None = None):
    """Debug logging with rich formatting."""
    console.print(f"🐛 {message}", style="bold blue")
    if data:
        console.print_json(data=data)

# Usage in code
from app.debug import debug_log

def process_document(doc_id: str):
    debug_log("Starting document processing", {"doc_id": doc_id})

    try:
        # Processing logic
        result = heavy_processing(doc_id)
        debug_log("Processing completed", {"result": result})
        return result
    except Exception as e:
        debug_log("Processing failed", {"error": str(e), "doc_id": doc_id})
        raise

FastAPI Request/Response Debugging

# Debugging middleware
from fastapi import Request
import time
import json

@app.middleware("http")
async def debug_middleware(request: Request, call_next):
    start_time = time.time()

    # Log request (note: reading the body in middleware can consume the
    # stream on older Starlette versions; use only in development)
    body = await request.body() if request.method in ["POST", "PUT", "PATCH"] else None

    print(f"📥 Request: {request.method} {request.url}")
    print(f"   Headers: {dict(request.headers)}")
    if body:
        try:
            print(f"   Body: {json.loads(body)}")
        except json.JSONDecodeError:
            print(f"   Body: {body[:200]}...")

    # Process request
    response = await call_next(request)

    # Log response
    process_time = time.time() - start_time
    print(f"📤 Response: {response.status_code} ({process_time:.3f}s)")

    return response

3. Performance Profiling

Frontend Performance

// Performance monitoring hook
import { useEffect } from 'react';

const usePerformanceMonitor = (componentName: string) => {
  useEffect(() => {
    const startTime = performance.now();

    return () => {
      const endTime = performance.now();
      const renderTime = endTime - startTime;

      if (renderTime > 100) { // Log slow renders
        console.warn(`🐌 Slow render: ${componentName} took ${renderTime.toFixed(2)}ms`);
      }
    };
  });
};

// Usage in components
const DocumentList = () => {
  usePerformanceMonitor('DocumentList');

  const { documents } = useDocuments();

  return (
    <div>
      {documents.map(doc => (
        <DocumentCard key={doc.id} document={doc} />
      ))}
    </div>
  );
};

// Bundle analysis: @next/bundle-analyzer is a config plugin that wraps
// next.config.js (not a component to import dynamically)
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});
module.exports = withBundleAnalyzer({
  /* existing Next.js config */
});

# Lighthouse CI in GitHub Actions
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [push]
jobs:
  lhci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install && npm run build
      - run: npm install -g @lhci/cli@0.9.x
      - run: lhci autorun

Backend Performance

# Performance profiling decorator
import functools
import logging
import time
from typing import Callable, Any

logger = logging.getLogger(__name__)

def profile_performance(func: Callable) -> Callable:
    """Decorator to profile async function performance."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs) -> Any:
        start_time = time.perf_counter()

        try:
            result = await func(*args, **kwargs)
            return result
        finally:
            end_time = time.perf_counter()
            execution_time = end_time - start_time

            logger.info(
                f"⚡ Performance: {func.__name__}",
                extra={
                    "function": func.__name__,
                    "execution_time": execution_time,
                    "args_count": len(args),
                    "kwargs_count": len(kwargs),
                }
            )

            # Alert on slow functions
            if execution_time > 5.0:  # 5 seconds
                logger.warning(
                    f"🐌 Slow function: {func.__name__} took {execution_time:.2f}s"
                )

    return wrapper

# Usage
@profile_performance
async def process_document(document_id: str):
    # Heavy processing
    pass

# Memory profiling
import psutil
import os

def log_memory_usage(context: str = ""):
    """Log current memory usage."""
    process = psutil.Process(os.getpid())
    memory_info = process.memory_info()

    logger.info(
        f"💾 Memory usage {context}",
        extra={
            "rss_mb": memory_info.rss / 1024 / 1024,
            "vms_mb": memory_info.vms / 1024 / 1024,
            "context": context,
        }
    )

🚀 Test Automation (To Be Configured)

1. GitHub Actions Testing

Note

Currently only the vulnerability scanning workflow is active. Testing workflows will need to be added once the tests are implemented.

# .github/workflows/test.yml
name: Tests

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  frontend-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "pnpm"

      - name: Install dependencies
        run: pnpm install

      - name: Run unit tests
        run: pnpm test --coverage

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage/lcov.info

  backend-tests:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432  # expose to the runner so localhost works
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

      redis:
        image: redis:7
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Install uv
        run: curl -LsSf https://astral.sh/uv/install.sh | sh

      - name: Install dependencies
        run: |
          cd apps/background-task
          uv sync

      - name: Run tests
        run: |
          cd apps/background-task
          uv run pytest --cov=app --cov-report=xml
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost/test
          REDIS_URL: redis://localhost:6379/0

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./apps/background-task/coverage.xml

  e2e-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "pnpm"

      - name: Install dependencies
        run: pnpm install

      - name: Install Playwright
        run: pnpm exec playwright install --with-deps

      - name: Start services
        run: |
          docker compose up -d
          sleep 30 # Wait for services to be ready

      - name: Run E2E tests
        run: pnpm exec playwright test

      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/

2. Pre-commit Hooks

# Install pre-commit
pnpm dlx husky install

# Add pre-commit hook
echo "pnpm lint && pnpm type-check && pnpm test" > .husky/pre-commit
chmod +x .husky/pre-commit

# Add commit message validation
echo "pnpm dlx commitlint --edit \$1" > .husky/commit-msg
chmod +x .husky/commit-msg

🛠️ Current Debugging

Debugging in Development

Debugging currently relies on:

  • Frontend: React DevTools, browser console, Next.js error overlay
  • Backend: Python logging, FastAPI automatic docs, print debugging
  • Docker: docker compose logs -f to monitor services

Available Tools

# Real-time logs
docker compose logs -f www-emblema
docker compose logs -f background-task

# Debug Python with debugpy
# In apps/background-task/app/main.py
import debugpy
debugpy.listen(("0.0.0.0", 5678))
# Then attach the VS Code debugger

# Debug Next.js
# In .env.local
NEXT_PUBLIC_DEBUG=true

📝 Testing Roadmap

Phase 1: MVP Testing (Q1 2024)

  • Set up pytest for backend services
  • Critical tests for chunking and embeddings
  • API authentication tests
  • Basic E2E test for document uploads

Phase 2: Coverage Expansion (Q2 2024)

  • Frontend component testing
  • Integration test suite
  • Performance benchmarks
  • Load testing with Locust
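Until Locust is wired in, a rough performance benchmark can be scripted with the standard library alone. A hedged sketch, where the target URL and request counts are placeholders:

```python
# Minimal load-test sketch using only the standard library (placeholder URL).
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_get(url: str) -> float:
    """Fetch the URL once and return the latency in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def percentile(latencies: list, p: float) -> float:
    """Nearest-rank percentile of a non-empty list of latencies."""
    ordered = sorted(latencies)
    index = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[index]

def run_benchmark(url: str, requests: int = 100, concurrency: int = 10) -> dict:
    """Fire `requests` GETs with `concurrency` workers and summarize latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_get, [url] * requests))
    return {
        "p50": percentile(latencies, 50),
        "p95": percentile(latencies, 95),
        "max": max(latencies),
    }

# Usage (against a running instance):
# print(run_benchmark("http://localhost:8000/health"))
```

Numbers from a script like this are only indicative; Locust adds ramp-up, user behavior modeling, and a live UI on top of the same idea.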

Phase 3: Full Automation (Q3 2024)

  • CI/CD con test gates
  • Automated regression testing
  • Visual regression testing
  • Security testing automation

🎯 Getting Started with Tests

When the time comes to implement tests:

  1. Backend First: start with tests for critical business logic
  2. API Testing: guarantee stable API contracts
  3. E2E Critical Path: tests for the main user flows
  4. Progressive Enhancement: add tests incrementally
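For point 1, the very first test can target pure logic with no infrastructure at all. A sketch with a hypothetical helper (the function name and signature are illustrative, not the project's real API):

```python
# tests/test_chunk_logic.py - a first, dependency-free unit test (illustrative names)
def split_into_chunks(tokens: list, size: int, overlap: int) -> list:
    """Split tokens into windows of `size` that overlap by `overlap` tokens."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks

def test_chunks_respect_size_and_overlap():
    chunks = split_into_chunks(list(range(10)), size=4, overlap=2)
    assert len(chunks) == 4
    assert all(len(c) <= 4 for c in chunks)
    # consecutive chunks share exactly `overlap` tokens
    assert chunks[0][-2:] == chunks[1][:2]
```

Tests like this run in milliseconds, need no database or Docker, and build the habit before the heavier fixtures arrive.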

Remember: a few reliable tests are better than many fragile ones!
