
File Upload API

Emblema's File Upload API supports reliable uploads of files of any size through:

  • Single Upload - direct upload for small files (up to 50MB)
  • Chunked Upload - piecewise upload for large files (over 50MB)
  • Resume Support - automatic resumption of interrupted uploads
  • Integrity Checks - validation via MD5/SHA256 hashes
  • Parallel Processing - simultaneous upload of multiple chunks
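The choice between the two upload modes comes down to the 50MB single-upload limit; a minimal sketch (`choose_strategy` is an illustrative helper, not part of the API):

```python
# Pick the upload mode from the documented 50MB single-upload limit.
SINGLE_UPLOAD_LIMIT = 50 * 1024 * 1024  # 50MB in bytes

def choose_strategy(file_size: int) -> str:
    """Return 'single' for files up to 50MB, 'chunked' otherwise."""
    return "single" if file_size <= SINGLE_UPLOAD_LIMIT else "chunked"
```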

Overview

Simple Upload Flow

[Diagram unavailable: the client sends a single POST /api/v1/file/upload request and receives a fileId.]

Chunked Upload Flow

[Diagram unavailable: the client initializes an upload session, uploads each chunk, then calls the complete endpoint to assemble the final file.]

Endpoint 1: Single File Upload

POST /api/v1/file/upload

Direct upload for files up to 50MB.

Headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Authorization | string | ✓ | JWT bearer token |
| Content-Type | string | ✓ | multipart/form-data |
| Content-Length | number | | Total request size |

Request Body (FormData)

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| file | File | ✓ | File to upload |

Response

Success (200 OK)

```json
{
  "fileId": "550e8400-e29b-41d4-a716-446655440000"
}
```

Errors

| Code | Description | Action |
| --- | --- | --- |
| 400 | No file provided | Include a file in the form |
| 401 | Unauthorized | Provide a valid JWT token |
| 413 | File too large | Use chunked upload |
| 415 | Unsupported media type | Check the file type |

cURL Example

```shell
curl -X POST https://your-domain.com/api/v1/file/upload \
  -H "Authorization: Bearer $TOKEN" \
  -F "file=@document.pdf"
```

JavaScript Example

```javascript
const uploadFile = async (file) => {
  const formData = new FormData();
  formData.append("file", file);

  // `token` is assumed to hold a valid JWT
  const response = await fetch("/api/v1/file/upload", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
    },
    body: formData,
  });

  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status}`);
  }

  const result = await response.json();
  return result.fileId;
};

// Usage
const fileId = await uploadFile(selectedFile);
```

Endpoint 2: Chunked Upload Flow

For files larger than 50MB, use the chunked upload flow.

2.1 Initialize Upload

POST /api/v1/file/upload/init

Creates a new chunked upload session.

Headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Authorization | string | ✓ | JWT bearer token |
| Content-Type | string | ✓ | application/json |

Request Body

```typescript
interface InitUploadRequest {
  fileName: string;   // File name
  fileSize: number;   // Total size in bytes
  mimeType: string;   // MIME type of the file
  chunkSize?: number; // Chunk size (default: 10MB)
  fileHash?: string;  // MD5 hash of the file (optional)
}
```

Example Request

```json
{
  "fileName": "large-document.pdf",
  "fileSize": 104857600,
  "mimeType": "application/pdf",
  "chunkSize": 10485760,
  "fileHash": "d41d8cd98f00b204e9800998ecf8427e"
}
```

Response

```json
{
  "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000",
  "chunkSize": 10485760,
  "totalChunks": 10,
  "expiresAt": "2024-01-16T10:00:00Z"
}
```
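The `totalChunks` value in the response follows directly from `fileSize` and `chunkSize`; the arithmetic, including each chunk's byte range, can be sketched as (`chunk_plan` is a hypothetical helper, not part of the API):

```python
import math

def chunk_plan(file_size: int, chunk_size: int):
    """Return the (start, end) byte range of every chunk; len() of the
    result matches the totalChunks field in the init response."""
    total_chunks = math.ceil(file_size / chunk_size)
    return [(i * chunk_size, min((i + 1) * chunk_size, file_size))
            for i in range(total_chunks)]

# The 100MB example above with 10MB chunks yields 10 chunks.
plan = chunk_plan(104857600, 10485760)
```

Note that the last chunk may be shorter than `chunkSize` when the file size is not an exact multiple.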

2.2 Upload Chunk

POST /api/v1/file/upload/chunk

Uploads a single chunk of the file.

Headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Authorization | string | ✓ | JWT bearer token |
| Content-Type | string | ✓ | multipart/form-data |

Request Body (FormData)

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| uploadId | string | ✓ | Upload session ID |
| chunkIndex | number | ✓ | Chunk index (0-based) |
| chunk | File/Blob | ✓ | Chunk data |
| chunkHash | string | ✓ | SHA256 hash of the chunk |

Response

```json
{
  "success": true,
  "chunkIndex": 0,
  "uploadedChunks": 1,
  "totalChunks": 10,
  "progress": 10.0
}
```

2.3 Complete Upload

POST /api/v1/file/upload/complete

Assembles all chunks and creates the final file.

Headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Authorization | string | ✓ | JWT bearer token |
| Content-Type | string | ✓ | application/json |

Request Body

```json
{
  "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000",
  "finalHash": "sha256hash-of-complete-file"
}
```

Response

```json
{
  "success": true,
  "fileId": "550e8400-e29b-41d4-a716-446655440000",
  "message": "Upload completed successfully",
  "fileSize": 104857600,
  "processedAt": "2024-01-15T10:30:00Z"
}
```
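The `finalHash` in the request example is a SHA-256 digest of the complete file; it can be computed client-side by streaming the file rather than loading it all into memory (`final_hash` is an illustrative helper, not part of the API):

```python
import hashlib

def final_hash(path: str, block_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1MB blocks to produce the
    finalHash value without holding the whole file in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()
```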

Endpoint 3: Utility Endpoints

3.1 Check Resumable Uploads

POST /api/v1/file/upload/check

Finds existing resumable uploads for a file.

Request Body

```json
{
  "fileName": "large-document.pdf",
  "fileSize": 104857600,
  "fileHash": "d41d8cd98f00b204e9800998ecf8427e"
}
```

Response

```json
{
  "hasResumableUpload": true,
  "uploads": [
    {
      "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000",
      "fileName": "large-document.pdf",
      "fileSize": 104857600,
      "uploadedChunks": 7,
      "totalChunks": 10,
      "progress": 70.0,
      "expiresAt": "2024-01-16T10:00:00Z",
      "createdAt": "2024-01-15T09:00:00Z",
      "missingChunks": [7, 8, 9]
    }
  ]
}
```
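The `missingChunks` list is simply the complement of the already-confirmed chunk indices; a client that tracks confirmed indices locally can derive the same list (`missing_chunks` is a hypothetical helper, not part of the API):

```python
def missing_chunks(total_chunks: int, uploaded: set) -> list:
    """Derive the chunks still to upload: every index in
    [0, total_chunks) not yet in the uploaded set."""
    return [i for i in range(total_chunks) if i not in uploaded]
```

For the example response above, 7 confirmed chunks out of 10 leaves indices 7, 8, and 9.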

3.2 Validate Chunk

POST /api/v1/file/upload/validate-chunk

Checks whether a chunk has already been uploaded correctly.

Request Body

```json
{
  "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000",
  "chunkIndex": 5,
  "chunkHash": "sha256hash-of-chunk"
}
```

Response

```json
{
  "valid": true,
  "exists": true,
  "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000",
  "chunkIndex": 5,
  "serverHash": "sha256hash-of-chunk",
  "uploadedAt": "2024-01-15T10:15:00Z",
  "size": 10485760
}
```
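Checking a locally held chunk against this response amounts to re-hashing it with SHA-256 and comparing digests (`chunk_matches_server` is an illustrative helper, not part of the API):

```python
import hashlib

def chunk_matches_server(chunk_data: bytes, server_hash: str) -> bool:
    """Re-hash a local chunk with SHA-256 and compare it to the
    serverHash field of the validate-chunk response."""
    return hashlib.sha256(chunk_data).hexdigest() == server_hash
```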

3.3 Resume Upload

POST /api/v1/file/upload/resume

Resumes an existing chunked upload.

Request Body

```json
{
  "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000"
}
```

Response

```json
{
  "success": true,
  "uploadId": "upload-550e8400-e29b-41d4-a716-446655440000",
  "fileName": "large-document.pdf",
  "totalChunks": 10,
  "uploadedChunks": 7,
  "missingChunks": [7, 8, 9],
  "progress": 70.0,
  "expiresAt": "2024-01-16T10:00:00Z"
}
```

Configuration & Limits

Environment Variables

```shell
# Maximum file size (default: 1GB)
MAX_FILE_SIZE=1073741824

# MinIO bucket for storage
MINIO_BUCKET=emblema-files

# Temporary directory for chunks
UPLOAD_TEMP_DIR=/tmp/uploads

# Upload session expiry (default: 24h)
UPLOAD_SESSION_EXPIRY=86400
```
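A service or tool reading this configuration would typically fall back to the documented defaults; a minimal sketch (`load_upload_config` is a hypothetical helper; the variable names are the ones listed above):

```python
import os

def load_upload_config(env=os.environ) -> dict:
    """Read the upload-related variables, falling back to the
    documented defaults when a variable is unset."""
    return {
        "max_file_size": int(env.get("MAX_FILE_SIZE", 1073741824)),   # 1GB
        "minio_bucket": env.get("MINIO_BUCKET", "emblema-files"),
        "upload_temp_dir": env.get("UPLOAD_TEMP_DIR", "/tmp/uploads"),
        "session_expiry": int(env.get("UPLOAD_SESSION_EXPIRY", 86400)),  # 24h
    }
```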

File Type Limits

| File Type | Extensions | Single Limit | Chunked Limit |
| --- | --- | --- | --- |
| Documents | .pdf, .docx, .txt, .md | 50MB | 10GB |
| Images | .jpg, .png, .gif, .webp | 20MB | 500MB |
| Audio | .mp3, .wav, .m4a, .ogg | 100MB | 5GB |
| Video | .mp4, .avi, .mov, .webm | 200MB | 10GB |
| Archives | .zip, .tar.gz, .7z | 100MB | 5GB |
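A client-side pre-check against the single-upload limits can be sketched as an extension lookup (the mapping mirrors the table above; server-side enforcement is authoritative, and the compound `.tar.gz` extension is omitted for simplicity):

```python
import os

# Single-upload limit in MB per extension, from the table above.
SINGLE_LIMITS_MB = {
    ".pdf": 50, ".docx": 50, ".txt": 50, ".md": 50,
    ".jpg": 20, ".png": 20, ".gif": 20, ".webp": 20,
    ".mp3": 100, ".wav": 100, ".m4a": 100, ".ogg": 100,
    ".mp4": 200, ".avi": 200, ".mov": 200, ".webm": 200,
    ".zip": 100, ".7z": 100,
}

def fits_single_upload(filename: str, size_bytes: int) -> bool:
    """True when the extension is known and the size is within its
    single-upload limit; unknown extensions fail the pre-check."""
    ext = os.path.splitext(filename)[1].lower()
    limit_mb = SINGLE_LIMITS_MB.get(ext)
    return limit_mb is not None and size_bytes <= limit_mb * 1024 * 1024
```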

Performance Guidelines

| File Size | Recommended Method | Chunk Size | Parallel Chunks |
| --- | --- | --- | --- |
| < 50MB | Single Upload | N/A | N/A |
| 50MB - 500MB | Chunked Upload | 10MB | 3 |
| 500MB - 2GB | Chunked Upload | 20MB | 5 |
| > 2GB | Chunked Upload | 50MB | 3 |
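The guidelines above translate into a small selection function (`recommended_settings` is an illustrative sketch; boundary values are treated as inclusive on the upper end, which the table leaves unspecified):

```python
MB = 1024 * 1024
GB = 1024 * MB

def recommended_settings(file_size: int):
    """Return (chunk_size, parallel_chunks) per the guidelines table,
    or None when a single upload is recommended."""
    if file_size < 50 * MB:
        return None  # single upload, no chunking
    if file_size <= 500 * MB:
        return (10 * MB, 3)
    if file_size <= 2 * GB:
        return (20 * MB, 5)
    return (50 * MB, 3)
```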

Error Codes

Common Errors

| Code | Constant | Description | Solution |
| --- | --- | --- | --- |
| 400 | NO_FILE_PROVIDED | No file provided | Include a file in the form |
| 400 | MISSING_REQUIRED_FIELDS | Required fields missing | Check the required parameters |
| 401 | UNAUTHORIZED | Invalid token | Refresh the JWT token |
| 403 | FILE_TYPE_NOT_ALLOWED | Unsupported file type | Use a supported format |
| 413 | FILE_TOO_LARGE | File too large | Use chunked upload |
| 413 | CHUNK_TOO_LARGE | Chunk too large | Reduce the chunk size |

Chunked Upload Errors

| Code | Constant | Description | Solution |
| --- | --- | --- | --- |
| 400 | INVALID_UPLOAD_SESSION | Invalid upload session | Start a new session |
| 400 | UPLOAD_SESSION_EXPIRED | Session expired | Create a new session |
| 400 | INVALID_CHUNK_INDEX | Invalid chunk index | Check the 0–totalChunks range |
| 400 | INCOMPLETE_UPLOAD | Incomplete upload | Upload all chunks |
| 400 | VALIDATION_FAILED | Chunk validation failed | Re-upload the chunk with the correct hash |
| 500 | INTEGRITY_CHECK_FAILED | Integrity check failed | Restart the full upload |
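For automated retry logic, a client can map these constants onto a coarse recovery strategy; a sketch (the mapping is an interpretation of the Solution columns above, not part of the API):

```python
# Recovery actions: 'reinit' = start a new session, 'retry_chunk' =
# re-upload the failing chunk, 'restart' = restart the full upload,
# 'abort' = give up and surface the error.
RECOVERY = {
    "INVALID_UPLOAD_SESSION": "reinit",
    "UPLOAD_SESSION_EXPIRED": "reinit",
    "VALIDATION_FAILED": "retry_chunk",
    "CHUNK_TOO_LARGE": "retry_chunk",
    "INTEGRITY_CHECK_FAILED": "restart",
}

def recovery_action(error_code: str) -> str:
    """Pick a recovery action for an error constant; anything not
    listed (e.g. UNAUTHORIZED) is not retried automatically."""
    return RECOVERY.get(error_code, "abort")
```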

Implementation Examples

1. Simple Upload with Progress

```javascript
const uploadWithProgress = async (file, onProgress) => {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    const formData = new FormData();
    formData.append("file", file);

    // Progress tracking
    xhr.upload.addEventListener("progress", (e) => {
      if (e.lengthComputable) {
        const progress = (e.loaded / e.total) * 100;
        onProgress(progress);
      }
    });

    xhr.addEventListener("load", () => {
      if (xhr.status === 200) {
        const result = JSON.parse(xhr.responseText);
        resolve(result.fileId);
      } else {
        reject(new Error(`Upload failed: ${xhr.status}`));
      }
    });

    xhr.addEventListener("error", () => reject(new Error("Upload failed")));

    xhr.open("POST", "/api/v1/file/upload");
    xhr.setRequestHeader("Authorization", `Bearer ${token}`);
    xhr.send(formData);
  });
};

// Usage
await uploadWithProgress(file, (progress) => {
  console.log(`Upload progress: ${progress.toFixed(1)}%`);
});
```

2. Complete Chunked Upload

```javascript
class ChunkedUploader {
  constructor(file, options = {}) {
    this.file = file;
    this.chunkSize = options.chunkSize || 10 * 1024 * 1024; // 10MB default
    this.maxRetries = options.maxRetries || 3;
    this.parallelUploads = options.parallelUploads || 3;
    this.onProgress = options.onProgress || (() => {});
  }

  async upload() {
    // 1. Check for resumable uploads
    const resumable = await this.checkResumable();
    let uploadId, totalChunks;

    if (resumable.hasResumableUpload) {
      const upload = resumable.uploads[0];
      uploadId = upload.uploadId;
      totalChunks = upload.totalChunks;
    } else {
      // 2. Initialize new upload
      const init = await this.initUpload();
      uploadId = init.uploadId;
      totalChunks = init.totalChunks;
    }

    // 3. Upload missing chunks
    const missingChunks = await this.getMissingChunks(uploadId, totalChunks);
    await this.uploadChunks(uploadId, missingChunks);

    // 4. Complete upload
    const result = await this.completeUpload(uploadId);
    return result.fileId;
  }

  async checkResumable() {
    // `token` is assumed to hold a valid JWT
    const response = await fetch("/api/v1/file/upload/check", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        fileName: this.file.name,
        fileSize: this.file.size,
        fileHash: await this.calculateHash(),
      }),
    });
    return response.json();
  }

  async initUpload() {
    const response = await fetch("/api/v1/file/upload/init", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        fileName: this.file.name,
        fileSize: this.file.size,
        mimeType: this.file.type,
        chunkSize: this.chunkSize,
        fileHash: await this.calculateHash(),
      }),
    });
    return response.json();
  }

  async uploadChunks(uploadId, missingChunks) {
    const totalChunks = Math.ceil(this.file.size / this.chunkSize);
    let completed = totalChunks - missingChunks.length;

    // Upload chunks in parallel batches
    for (let i = 0; i < missingChunks.length; i += this.parallelUploads) {
      const batch = missingChunks.slice(i, i + this.parallelUploads);

      await Promise.all(
        batch.map(async (chunkIndex) => {
          const chunk = this.getChunk(chunkIndex);
          await this.uploadChunk(uploadId, chunkIndex, chunk);
          completed++;
          this.onProgress((completed / totalChunks) * 100);
        }),
      );
    }
  }

  async uploadChunk(uploadId, chunkIndex, chunk, retries = 0) {
    try {
      const formData = new FormData();
      formData.append("uploadId", uploadId);
      formData.append("chunkIndex", chunkIndex.toString());
      formData.append("chunk", chunk);
      formData.append("chunkHash", await this.calculateChunkHash(chunk));

      const response = await fetch("/api/v1/file/upload/chunk", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
        },
        body: formData,
      });

      if (!response.ok) {
        throw new Error(`Chunk upload failed: ${response.status}`);
      }

      return response.json();
    } catch (error) {
      if (retries < this.maxRetries) {
        await this.delay(1000 * Math.pow(2, retries)); // Exponential backoff
        return this.uploadChunk(uploadId, chunkIndex, chunk, retries + 1);
      }
      throw error;
    }
  }

  async completeUpload(uploadId) {
    // NOTE: the complete endpoint's example shows a SHA-256 finalHash;
    // adjust the algorithm here to whatever the server expects.
    const response = await fetch("/api/v1/file/upload/complete", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        uploadId,
        finalHash: await this.calculateHash(),
      }),
    });
    return response.json();
  }

  getChunk(chunkIndex) {
    const start = chunkIndex * this.chunkSize;
    const end = Math.min(start + this.chunkSize, this.file.size);
    return this.file.slice(start, end);
  }

  async calculateHash() {
    // Calculate the MD5 hash of the entire file.
    // NOTE: the Web Crypto API does not implement MD5, so a third-party
    // library is required here (this sketch uses spark-md5). For very
    // large files, hash incrementally instead of buffering the whole file.
    const arrayBuffer = await this.file.arrayBuffer();
    return SparkMD5.ArrayBuffer.hash(arrayBuffer);
  }

  async calculateChunkHash(chunk) {
    // Calculate the SHA256 hash of the chunk
    const arrayBuffer = await chunk.arrayBuffer();
    const hashBuffer = await crypto.subtle.digest("SHA-256", arrayBuffer);
    return Array.from(new Uint8Array(hashBuffer))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
  }

  delay(ms) {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }

  async getMissingChunks(uploadId, totalChunks) {
    const response = await fetch("/api/v1/file/upload/resume", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ uploadId }),
    });
    const result = await response.json();
    return result.missingChunks || [];
  }
}

// Usage
const uploader = new ChunkedUploader(largeFile, {
  chunkSize: 20 * 1024 * 1024, // 20MB chunks
  onProgress: (progress) => {
    console.log(`Upload progress: ${progress.toFixed(1)}%`);
    updateProgressBar(progress);
  },
});

try {
  const fileId = await uploader.upload();
  console.log("Upload completed:", fileId);
} catch (error) {
  console.error("Upload failed:", error);
}
```

3. Python Chunked Upload Client

```python
import hashlib
import mimetypes
import os
from typing import Optional, Callable

import requests


class ChunkedUploader:
    def __init__(self, file_path: str, token: str, base_url: str = 'https://your-domain.com/api/v1'):
        self.file_path = file_path
        self.token = token
        self.base_url = base_url
        self.chunk_size = 10 * 1024 * 1024  # 10MB
        self.headers = {
            'Authorization': f'Bearer {token}'
        }

    def upload(self, progress_callback: Optional[Callable[[float], None]] = None) -> str:
        """Upload the file using chunked upload."""
        # Calculate file hash
        file_hash = self._calculate_file_hash()
        file_size = self._get_file_size()
        file_name = os.path.basename(self.file_path)

        # Check for resumable uploads
        resumable = self._check_resumable(file_name, file_size, file_hash)

        if resumable.get('hasResumableUpload'):
            upload = resumable['uploads'][0]
            upload_id = upload['uploadId']
            missing_chunks = upload.get('missingChunks', [])
        else:
            # Initialize a new upload
            init_response = self._init_upload(file_name, file_size, file_hash)
            upload_id = init_response['uploadId']
            missing_chunks = list(range(init_response['totalChunks']))

        # Upload the missing chunks
        self._upload_chunks(upload_id, missing_chunks, progress_callback)

        # Complete the upload
        result = self._complete_upload(upload_id, file_hash)
        return result['fileId']

    def _check_resumable(self, file_name: str, file_size: int, file_hash: str) -> dict:
        response = requests.post(
            f'{self.base_url}/file/upload/check',
            headers={**self.headers, 'Content-Type': 'application/json'},
            json={
                'fileName': file_name,
                'fileSize': file_size,
                'fileHash': file_hash
            }
        )
        return response.json()

    def _init_upload(self, file_name: str, file_size: int, file_hash: str) -> dict:
        response = requests.post(
            f'{self.base_url}/file/upload/init',
            headers={**self.headers, 'Content-Type': 'application/json'},
            json={
                'fileName': file_name,
                'fileSize': file_size,
                'mimeType': self._get_mime_type(file_name),
                'chunkSize': self.chunk_size,
                'fileHash': file_hash
            }
        )
        return response.json()

    def _upload_chunks(self, upload_id: str, missing_chunks: list, progress_callback: Optional[Callable]):
        total_chunks = len(missing_chunks)

        with open(self.file_path, 'rb') as f:
            for i, chunk_index in enumerate(missing_chunks):
                chunk_data = self._read_chunk(f, chunk_index)
                chunk_hash = self._calculate_chunk_hash(chunk_data)

                self._upload_chunk(upload_id, chunk_index, chunk_data, chunk_hash)

                if progress_callback:
                    progress = ((i + 1) / total_chunks) * 100
                    progress_callback(progress)

    def _upload_chunk(self, upload_id: str, chunk_index: int, chunk_data: bytes, chunk_hash: str):
        files = {
            'chunk': ('chunk', chunk_data, 'application/octet-stream')
        }
        data = {
            'uploadId': upload_id,
            'chunkIndex': str(chunk_index),
            'chunkHash': chunk_hash
        }

        response = requests.post(
            f'{self.base_url}/file/upload/chunk',
            headers=self.headers,
            files=files,
            data=data
        )

        if not response.ok:
            raise Exception(f'Chunk upload failed: {response.status_code}')

        return response.json()

    def _complete_upload(self, upload_id: str, file_hash: str) -> dict:
        # NOTE: the complete endpoint's example shows a SHA-256 finalHash;
        # adjust the algorithm to whatever the server expects.
        response = requests.post(
            f'{self.base_url}/file/upload/complete',
            headers={**self.headers, 'Content-Type': 'application/json'},
            json={
                'uploadId': upload_id,
                'finalHash': file_hash
            }
        )
        return response.json()

    def _calculate_file_hash(self) -> str:
        """Calculate the MD5 hash of the entire file."""
        hasher = hashlib.md5()
        with open(self.file_path, 'rb') as f:
            for chunk in iter(lambda: f.read(4096), b""):
                hasher.update(chunk)
        return hasher.hexdigest()

    def _calculate_chunk_hash(self, chunk_data: bytes) -> str:
        """Calculate the SHA256 hash of a chunk."""
        return hashlib.sha256(chunk_data).hexdigest()

    def _get_file_size(self) -> int:
        return os.path.getsize(self.file_path)

    def _read_chunk(self, file_obj, chunk_index: int) -> bytes:
        file_obj.seek(chunk_index * self.chunk_size)
        return file_obj.read(self.chunk_size)

    def _get_mime_type(self, file_name: str) -> str:
        mime_type, _ = mimetypes.guess_type(file_name)
        return mime_type or 'application/octet-stream'


# Usage
def progress_callback(progress):
    print(f'Upload progress: {progress:.1f}%')


uploader = ChunkedUploader('/path/to/large/file.pdf', 'your-jwt-token')
try:
    file_id = uploader.upload(progress_callback)
    print(f'Upload completed: {file_id}')
except Exception as e:
    print(f'Upload failed: {e}')
```

Best Practices

Performance

  1. Choose an appropriate chunk size - 10-50MB for large files
  2. Parallel uploads - at most 3-5 simultaneous chunks
  3. Compression - compress files when possible
  4. Retry logic - implement retries with exponential backoff
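The retry recommendation amounts to delays of base × 2^attempt, usually with a little random jitter to avoid synchronized retries; a minimal sketch (`backoff_delays` is an illustrative helper):

```python
import random

def backoff_delays(max_retries: int = 3, base: float = 1.0, jitter: float = 0.1):
    """Delays (in seconds) before each retry attempt: exponential
    growth (base * 2**attempt) plus a small random jitter."""
    return [base * (2 ** attempt) + random.uniform(0, jitter)
            for attempt in range(max_retries)]
```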

Reliability

  1. Hash validation - always compute and verify hashes
  2. Resume support - check for existing uploads before starting
  3. Progress tracking - give the user feedback
  4. Error handling - handle all possible errors

Security

  1. Validate file types - check extensions and MIME types
  2. Scan for malware - implement antivirus scanning
  3. Size limits - enforce the size limits
  4. User quotas - implement per-user limits

UX

  1. Progress indicators - show progress in real time
  2. Cancel support - allow uploads to be cancelled
  3. Background uploads - continue uploads in the background
  4. Offline support - handle network disconnections
