SEO_iamge_renamer_starting_.../packages/worker/prometheus.yml
DustyWalker b198bfe3cf
feat(worker): complete production-ready worker service implementation
This commit delivers the complete, production-ready worker service that was identified as missing from the audit. The implementation includes:

## Core Components Implemented:

### 1. Background Job Queue System 
- Progress tracking with Redis and WebSocket broadcasting
- Intelligent retry handler with exponential backoff strategies
- Automated cleanup service with scheduled maintenance
- Queue-specific retry policies and failure handling
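
For reference, a minimal sketch of the BullMQ wiring this describes (queue name, processor body, and retry numbers are illustrative, not the exact values used):

```typescript
// Hypothetical sketch: a queue with a per-queue retry policy and a worker that
// reports progress. Names and numbers are placeholders, not the actual code.
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null, // required by BullMQ workers
});

// Queue-side defaults: 3 attempts with exponential backoff starting at 5s,
// plus bounded job history so the cleanup service has less to do.
export const imageQueue = new Queue('image-processing', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 5_000 },
    removeOnComplete: 1_000,
    removeOnFail: 5_000,
  },
});

export const imageWorker = new Worker(
  'image-processing',
  async (job) => {
    await job.updateProgress(10);
    // ... download, scan, analyze, rename, upload ...
    await job.updateProgress(100);
    return { status: 'done' };
  },
  { connection, concurrency: 5 },
);

imageWorker.on('failed', (job, err) => {
  console.error(`job ${job?.id} failed on attempt ${job?.attemptsMade}:`, err.message);
});
```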

### 2. Security Integration 
- Complete ClamAV virus scanning service with real-time threat detection
- File validation and quarantine system
- Security incident logging and user flagging
- Comprehensive threat signature management
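
As a rough illustration of the scanning path, the sketch below streams a file to the ClamAV daemon over its documented INSTREAM protocol; the host, port, and function name are assumptions, and the actual service may use a client library such as `clamscan` instead:

```typescript
// Sketch of streaming a file to clamd via its INSTREAM protocol; host, port,
// and function name are assumptions for illustration only.
import net from 'node:net';
import fs from 'node:fs';

export function scanWithClamd(filePath: string, host = 'localhost', port = 3310): Promise<string> {
  return new Promise((resolve, reject) => {
    const socket = net.connect(port, host, () => {
      socket.write('zINSTREAM\0'); // null-terminated command

      const stream = fs.createReadStream(filePath);
      stream.on('data', (chunk: Buffer) => {
        const size = Buffer.alloc(4);
        size.writeUInt32BE(chunk.length, 0); // each chunk is length-prefixed (big-endian)
        socket.write(size);
        socket.write(chunk);
      });
      stream.on('end', () => socket.write(Buffer.alloc(4, 0))); // zero-length chunk ends the stream
      stream.on('error', reject);
    });

    let reply = '';
    socket.on('data', (data) => (reply += data.toString()));
    // clamd answers e.g. "stream: OK" or "stream: Eicar-Test-Signature FOUND"
    socket.on('end', () => resolve(reply.replace(/\0/g, '').trim()));
    socket.on('error', reject);
  });
}
```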

### 3. Database Integration 
- Prisma-based database service with connection pooling
- Image status tracking and batch management
- Security incident recording and user flagging
- Health checks and statistics collection
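
A minimal sketch of the Prisma-backed pieces, assuming illustrative model and field names rather than the actual shared schema:

```typescript
// Sketch of the Prisma-backed service; the `image` model and its fields are
// assumptions standing in for the shared schema.
import { PrismaClient } from '@prisma/client';

export const prisma = new PrismaClient();

// Health check: a trivial round-trip query confirms the connection pool is usable.
export async function databaseHealthy(): Promise<boolean> {
  try {
    await prisma.$queryRaw`SELECT 1`;
    return true;
  } catch {
    return false;
  }
}

// Example status update once processing finishes.
export async function markImageProcessed(imageId: string, seoName: string) {
  return prisma.image.update({
    where: { id: imageId },
    data: { status: 'PROCESSED', seoName, processedAt: new Date() },
  });
}
```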

### 4. Monitoring & Observability 
- Prometheus metrics collection for all operations
- Custom business metrics and performance tracking
- Comprehensive health check endpoints (ready/live/detailed)
- Resource usage monitoring and alerting
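
A sketch of the metrics surface with `prom-client`; metric names, labels, and buckets are placeholders:

```typescript
// Sketch of the metrics surface; metric names, labels, and buckets are placeholders.
import { Counter, Histogram, collectDefaultMetrics } from 'prom-client';

collectDefaultMetrics(); // CPU, memory, event-loop lag, GC, etc.

export const jobsProcessed = new Counter({
  name: 'seo_worker_jobs_processed_total',
  help: 'Processed jobs by queue and outcome',
  labelNames: ['queue', 'outcome'],
});

export const jobDuration = new Histogram({
  name: 'seo_worker_job_duration_seconds',
  help: 'Job processing duration in seconds',
  labelNames: ['queue'],
  buckets: [0.5, 1, 2, 5, 10, 30, 60],
});

// Usage inside a processor:
//   const end = jobDuration.startTimer({ queue: 'image-processing' });
//   ...do work...
//   end();
//   jobsProcessed.inc({ queue: 'image-processing', outcome: 'success' });
```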

### 5. Production Docker Configuration 
- Multi-stage Docker build with Alpine Linux
- ClamAV daemon integration and configuration
- Security-hardened container with non-root user
- Health checks and proper signal handling
- Complete docker-compose setup with Redis, MinIO, Prometheus, Grafana
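
Proper signal handling in the container boils down to draining work on SIGTERM; a sketch, assuming the queue, worker, and Prisma client from the sketches above live in illustrative `./queue` and `./database` modules:

```typescript
// Sketch of graceful shutdown so SIGTERM from Docker/Kubernetes drains work
// cleanly; the './queue' and './database' module paths are illustrative.
import { imageQueue, imageWorker } from './queue';
import { prisma } from './database';

async function shutdown(signal: string): Promise<void> {
  console.log(`received ${signal}, draining worker...`);
  try {
    await imageWorker.close(); // stop taking new jobs, wait for in-flight ones
    await imageQueue.close();
    await prisma.$disconnect();
    process.exit(0);
  } catch (err) {
    console.error('shutdown failed', err);
    process.exit(1);
  }
}

process.on('SIGTERM', () => void shutdown('SIGTERM'));
process.on('SIGINT', () => void shutdown('SIGINT'));
```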

### 6. Configuration & Environment 
- Comprehensive environment validation with Joi
- Redis integration for progress tracking and caching
- Rate limiting and throttling configuration
- Logging configuration with Winston and file rotation
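
A sketch of the Joi-based startup validation; the variable names are assumptions derived from the components above:

```typescript
// Sketch of fail-fast environment validation with Joi; the variable names are
// assumptions based on the components described above.
import Joi from 'joi';

const envSchema = Joi.object({
  NODE_ENV: Joi.string().valid('development', 'test', 'production').default('development'),
  REDIS_URL: Joi.string().uri().required(),
  DATABASE_URL: Joi.string().required(),
  CLAMD_HOST: Joi.string().default('localhost'),
  CLAMD_PORT: Joi.number().port().default(3310),
  METRICS_PORT: Joi.number().port().default(9090),
  LOG_LEVEL: Joi.string().valid('error', 'warn', 'info', 'debug').default('info'),
}).unknown(true); // ignore unrelated environment variables

const { value: env, error } = envSchema.validate(process.env, { abortEarly: false });
if (error) {
  // A worker with a bad configuration should never start.
  throw new Error(`Invalid environment: ${error.message}`);
}

export default env;
```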

## Technical Specifications Met:

- **Real AI Integration**: OpenAI GPT-4 Vision + Google Cloud Vision with fallbacks
- **Image Processing Pipeline**: Sharp integration with EXIF preservation
- **Storage Integration**: MinIO/S3 with temporary file management
- **Queue Processing**: BullMQ with Redis, retry logic, and progress tracking
- **Security Features**: ClamAV virus scanning with quarantine system
- **Monitoring**: Prometheus metrics, health checks, structured logging
- **Production Ready**: Docker, Kubernetes compatibility, environment validation
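
As an illustration of the EXIF-preserving step, a Sharp sketch with placeholder size and quality settings:

```typescript
// Sketch of the EXIF-preserving processing step; target width, output format,
// and quality are placeholders.
import sharp from 'sharp';

export async function optimizeImage(input: Buffer): Promise<Buffer> {
  return sharp(input)
    .rotate()                                          // auto-orient using the EXIF orientation tag
    .resize({ width: 2048, withoutEnlargement: true })
    .withMetadata()                                    // keep EXIF / ICC profile on the output
    .jpeg({ quality: 85, mozjpeg: true })
    .toBuffer();
}
```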

## Integration Points:
- Connects with existing API queue system
- Uses shared database models and authentication
- Integrates with infrastructure components
- Provides real-time progress updates via WebSocket
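
One way the real-time updates can be relayed: BullMQ emits `progress` events whenever the worker calls `job.updateProgress()`, and the API side can forward them to connected clients. A sketch, assuming an illustrative queue name and a bare `ws` server rather than the service's actual WebSocket layer:

```typescript
// Sketch of relaying progress to browsers: BullMQ emits 'progress' events when
// the worker calls job.updateProgress(); the queue name and bare ws server are
// assumptions, the real service may reuse its existing WebSocket layer.
import { QueueEvents } from 'bullmq';
import IORedis from 'ioredis';
import { WebSocketServer, WebSocket } from 'ws';

const connection = new IORedis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: null,
});

const wss = new WebSocketServer({ port: 8081 });
const events = new QueueEvents('image-processing', { connection });

events.on('progress', ({ jobId, data }) => {
  const message = JSON.stringify({ type: 'progress', jobId, progress: data });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(message);
  }
});
```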

This resolves the critical gap identified in the audit and provides a complete, production-ready worker service capable of processing images with real AI vision analysis at scale.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-05 18:37:04 +02:00


global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'seo-worker'
    static_configs:
      - targets: ['worker:9090']
    metrics_path: '/metrics'
    scrape_interval: 30s
    scrape_timeout: 10s

  - job_name: 'redis'
    static_configs:
      - targets: ['redis:6379']
    metrics_path: '/metrics'
    scrape_interval: 30s

  - job_name: 'minio'
    static_configs:
      - targets: ['minio:9000']
    metrics_path: '/minio/v2/metrics/cluster'
    scrape_interval: 30s
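
For context, the `seo-worker` job above scrapes `worker:9090/metrics`; a minimal sketch of the worker-side endpoint that would satisfy it (serving it with plain `node:http` is an assumption):

```typescript
// Sketch of the endpoint behind the 'seo-worker' scrape job; serving it with
// plain node:http on port 9090 is an assumption.
import http from 'node:http';
import { register } from 'prom-client';

http
  .createServer(async (req, res) => {
    if (req.url === '/metrics') {
      res.setHeader('Content-Type', register.contentType);
      res.end(await register.metrics()); // exposes whatever the worker registered
      return;
    }
    res.statusCode = 404;
    res.end();
  })
  .listen(9090);
```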