Performance • Concurrency • Hardware Utilization
FlowBatch processes multiple files simultaneously to reduce total processing time. Instead of converting images one at a time, FlowBatch runs several conversions in parallel, using your CPU cores efficiently.
When you start a batch job, FlowBatch divides the work across multiple concurrent tasks:
Sequential (one file at a time):
File 1: [process] ────────►
File 2:                     [process] ────────►
File 3:                                         [process] ────────►
Total time = sum of all files

Parallel (concurrent tasks):
File 1: [process] ────────►
File 2: [process] ────────►
File 3: [process] ────────►
File 4: [process] ────────►
Total time ≈ longest file's processing time × (files ÷ concurrency)
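For a rough illustration (the numbers are assumptions, not benchmarks): at about 2 seconds per file, 100 files take roughly 200 seconds sequentially, while four concurrent tasks bring the estimate down to about 2 s × (100 ÷ 4) ≈ 50 seconds.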
Each parallel task handles the complete processing pipeline for one file: HEIC pre-conversion (if needed), FFmpeg processing (resize, convert, watermark), and output file writing.
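As an illustration only, a minimal per-file pipeline might look like the sketch below. The `heif-dec` and `ffmpeg` invocations (and the 1920-pixel resize) are simplified assumptions, not FlowBatch's actual commands:

```python
import os
import subprocess
import tempfile
from pathlib import Path

def process_file(src: Path, dst: Path) -> Path:
    """Full pipeline for one file: optional HEIC decode, FFmpeg processing, output."""
    work = src

    # Step 1: HEIC pre-conversion (only when needed). heif-dec writes an
    # intermediate image that FFmpeg can read. (Illustrative invocation.)
    if src.suffix.lower() in (".heic", ".heif"):
        fd, tmp = tempfile.mkstemp(suffix=".png")
        os.close(fd)
        subprocess.run(["heif-dec", str(src), tmp], check=True)
        work = Path(tmp)

    # Step 2: FFmpeg processing (resize/convert/watermark) and output writing.
    # The filter chain is illustrative; a real job would build it from settings.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(work), "-vf", "scale=1920:-1", str(dst)],
        check=True,
    )
    return dst
```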
FlowBatch automatically determines the concurrency level based on your system:
concurrency = min(CPU cores, 8)
Minimum: 2 | Maximum: 8
Beyond 8 concurrent processes, disk I/O becomes the bottleneck rather than CPU. Reading/writing many large image files simultaneously can actually slow down overall throughput due to disk seek times and bandwidth limits.
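The clamp described above is easy to express directly. The sketch below (plain Python standard library, not FlowBatch's code) computes it and uses it to drive a worker pool; `process_file` stands in for the hypothetical per-file pipeline sketched earlier:

```python
import os
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def auto_concurrency() -> int:
    """min(CPU cores, 8), never below 2."""
    cores = os.cpu_count() or 2
    return max(2, min(cores, 8))

def run_batch(files: list[Path], out_dir: Path, process_file) -> list[Path]:
    """Run the per-file pipeline for every file, several files at a time.
    Threads suffice here because the heavy work runs in external
    heif-dec/ffmpeg processes."""
    with ThreadPoolExecutor(max_workers=auto_concurrency()) as pool:
        futures = [
            pool.submit(process_file, src, out_dir / (src.stem + ".jpg"))
            for src in files
        ]
        return [f.result() for f in futures]
```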
The performance improvement depends on the type of processing:
| Scenario | Sequential | Parallel (4 cores) | Speedup |
|---|---|---|---|
| 100 HEIC → JPG | ~3-4 min | ~1 min | 3-4x faster |
| 100 JPG resize | ~30 sec | ~10 sec | 3x faster |
| 100 PNG → WebP | ~45 sec | ~12 sec | 3-4x faster |
| Branching (3 outputs) | ~90 sec | ~25 sec | 3-4x faster |
Times are approximate and vary based on file sizes, output formats, and hardware.
HEIC conversion benefits significantly from parallel processing because it involves two steps:

1. Decoding the HEIC file with heif-dec into an intermediate image.
2. Processing that intermediate image with FFmpeg and writing the output file.
With sequential processing, these steps happen one file at a time. With parallel processing, multiple HEIC decodes and FFmpeg processes run simultaneously, dramatically reducing wait time.
Future versions of FlowBatch will include settings to tune parallel processing for your specific hardware and use case:
- **Concurrency level:** Adjust the number of concurrent tasks (1-16). Lower values reduce system load; higher values maximize throughput on powerful hardware.
- **GPU concurrency limit:** A separate limit for GPU-accelerated encoding. Consumer GPUs typically support 2-3 concurrent encode sessions (NVENC/QuickSync), and this setting will prevent "too many concurrent sessions" errors.
- **Priority mode:** Choose between "Background" (lower priority, won't slow other apps) and "Performance" (higher priority, faster processing but may affect system responsiveness).
- **Memory limit:** Cap memory usage to prevent system slowdown when processing very large files or many files simultaneously.
- **Disk I/O limit:** Limit disk operations per second to prevent I/O bottlenecks on HDDs or network drives.
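For illustration, a settings object for these planned options might look something like the sketch below. Every key name and default here is hypothetical; none of these options exist in FlowBatch yet:

```python
# Hypothetical settings sketch; none of these keys exist in FlowBatch today,
# and the names and defaults are invented for illustration.
planned_settings = {
    "max_concurrent_tasks": 4,      # 1-16; lower values reduce system load
    "gpu_concurrency_limit": 2,     # match your GPU's encode-session cap
    "priority_mode": "background",  # "background" or "performance"
    "memory_limit_mb": 4096,        # cap RAM used while processing
    "disk_iops_limit": 200,         # protect HDDs and network drives
}
```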
FlowBatch uses hardware acceleration (NVIDIA NVENC, Intel QuickSync, AMD AMF) when available. Parallel processing and hardware acceleration work together:
- **CPU:** All concurrent tasks use CPU cores, so more concurrency means more CPU usage. The automatic limit prevents overloading your system.
- **GPU:** FFmpeg encoding offloads to the GPU, but consumer GPUs have session limits, and running too many concurrent GPU encodes will cause failures. Future settings will allow separate GPU concurrency limits (see the sketch after this list).
- **HEIC decoding:** heif-dec is always CPU-based, so this step benefits directly from parallel processing regardless of GPU availability.
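A common way to pair a wider CPU pool with a tighter GPU limit is a separate semaphore around GPU-bound encodes. The sketch below shows the general technique under an assumed 2-session cap; it is not FlowBatch's implementation, and `run_ffmpeg` is a hypothetical placeholder:

```python
import threading

GPU_SESSIONS = 2                   # assumed consumer NVENC/QuickSync session cap
gpu_slots = threading.Semaphore(GPU_SESSIONS)

def encode(task, use_gpu: bool) -> None:
    """Run one encode; GPU-accelerated encodes also hold a GPU session slot."""
    if use_gpu:
        # At most GPU_SESSIONS encodes touch the GPU at once, even if the
        # CPU-side worker pool is wider than that.
        with gpu_slots:
            run_ffmpeg(task, hwaccel=True)
    else:
        run_ffmpeg(task, hwaccel=False)

def run_ffmpeg(task, hwaccel: bool) -> None:
    ...  # hypothetical placeholder for the actual FFmpeg invocation
```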
System slowdown during a batch can happen on machines with limited RAM or when processing very large files. Future versions will include memory limits and priority settings. For now, avoid running other intensive tasks while FlowBatch is working.
If you're processing files from/to a slow disk (HDD, network drive, USB 2.0), disk I/O may be the bottleneck. Parallel processing won't help much in this case. Consider using an SSD for input/output folders.
Consumer GPUs limit concurrent encode sessions (typically 2-3 for NVENC). If you see GPU-related errors, the workaround is to process smaller batches. Future versions will add GPU-specific concurrency limits.
FlowBatch uses parallel processing to significantly reduce batch processing time.
The current automatic settings work well for most systems. Advanced tuning options are planned for users who want more control over resource usage.
Keywords: parallel processing, concurrent batch processing, multi-threaded image conversion, CPU utilization, performance optimization, batch processing speed, HEIC parallel conversion, hardware acceleration