Performance • Concurrency • Hardware Utilization

Parallel Processing: How FlowBatch Uses Your Hardware

Last Updated: December 1, 2025

FlowBatch processes multiple files simultaneously to reduce total processing time. Instead of converting images one at a time, FlowBatch runs several conversions in parallel, using your CPU cores efficiently.

How Parallel Processing Works

When you start a batch job, FlowBatch divides the work across multiple concurrent tasks:

Sequential (Old Way)

File 1: [process] ────────►

File 2:                [process] ────────►

File 3:                               [process] ────────►

Total time = sum of all per-file processing times

Parallel (Current)

File 1: [process] ────────►

File 2: [process] ────────►

File 3: [process] ────────►

File 4: [process] ────────►

Total time ≈ per-file time × (files ÷ concurrency)
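
As a rough worked example (assuming ~2 seconds of processing per file): 100 files at a concurrency of 4 finish in roughly (100 ÷ 4) × 2 s ≈ 50 s, compared with roughly 100 × 2 s = 200 s when processed one at a time.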

Each parallel task handles the complete processing pipeline for one file: HEIC pre-conversion (if needed), FFmpeg processing (resize, convert, watermark), and output file writing.
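
The exact implementation isn't shown here, but the underlying pattern is a simple concurrency-limited worker pool. A minimal TypeScript sketch (processFile is a placeholder for the per-file pipeline described above):

    // Run one async task per file, but never more than `concurrency` at once.
    async function runBatch(
      files: string[],
      concurrency: number,
      processFile: (file: string) => Promise<void>
    ): Promise<void> {
      const queue = [...files];
      // Each worker pulls the next file from the shared queue until it is empty.
      const worker = async (): Promise<void> => {
        for (let file = queue.shift(); file !== undefined; file = queue.shift()) {
          await processFile(file);
        }
      };
      // Start `concurrency` workers and wait for all of them to finish.
      await Promise.all(Array.from({ length: concurrency }, () => worker()));
    }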

Current Defaults

FlowBatch automatically determines the concurrency level based on your system:

Concurrency Formula

concurrency = max(2, min(CPU cores, 8))

Minimum: 2 | Maximum: 8

4-core CPU   →  4 concurrent tasks
8-core CPU   →  8 concurrent tasks
16-core CPU  →  8 concurrent tasks (capped)
2-core CPU   →  2 concurrent tasks (minimum)
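
Expressed in code, the rule above amounts to something like the following (a Node.js/TypeScript sketch, not FlowBatch's actual source):

    import os from "node:os";

    // Clamp concurrency between 2 and 8, based on the number of CPU cores.
    const concurrency = Math.max(2, Math.min(os.cpus().length, 8));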

Why Cap at 8?

Beyond 8 concurrent processes, disk I/O becomes the bottleneck rather than CPU. Reading/writing many large image files simultaneously can actually slow down overall throughput due to disk seek times and bandwidth limits.

Performance Impact

The performance improvement depends on the type of processing:

Scenario               | Sequential | Parallel (4 cores) | Speedup
100 HEIC → JPG         | ~3-4 min   | ~1 min             | 3-4x faster
100 JPG resize         | ~30 sec    | ~10 sec            | 3x faster
100 PNG → WebP         | ~45 sec    | ~12 sec            | 3-4x faster
Branching (3 outputs)  | ~90 sec    | ~25 sec            | 3-4x faster

Times are approximate and vary based on file sizes, output formats, and hardware.

HEIC Conversion Benefits

HEIC conversion benefits significantly from parallel processing because it involves two steps:

  1. HEIC decode: The heif-dec tool converts HEIC to an intermediate PNG (~1-2 seconds per file)
  2. FFmpeg process: The PNG is then processed through the workflow pipeline

With sequential processing, these steps happen one file at a time. With parallel processing, multiple HEIC decodes and FFmpeg processes run simultaneously, dramatically reducing wait time.
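
For illustration, one file's two-step pipeline could look like the sketch below. The exact heif-dec and FFmpeg arguments FlowBatch uses aren't documented here, so the options shown (a plain decode followed by a resize to JPG) are assumptions:

    import { execFile } from "node:child_process";
    import { promisify } from "node:util";

    const run = promisify(execFile);

    async function convertHeic(input: string, output: string): Promise<void> {
      const intermediate = `${output}.tmp.png`;      // intermediate PNG from step 1
      await run("heif-dec", [input, intermediate]);  // step 1: CPU-bound HEIC decode
      // step 2: hand the PNG to FFmpeg for the rest of the workflow (resize shown as an example)
      await run("ffmpeg", ["-y", "-i", intermediate, "-vf", "scale=1920:-2", output]);
    }

Running many convertHeic calls through a worker pool like the one shown earlier is what produces the 3-4x speedup in the table above.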

Planned: User-Configurable Settings

Future versions of FlowBatch will include settings to tune parallel processing for your specific hardware and use case:

Concurrency Slider

Adjust the number of concurrent tasks (1-16). Lower values reduce system load; higher values maximize throughput on powerful hardware.

GPU Concurrency Limit

Separate limit for GPU-accelerated encoding. Consumer GPUs typically support 2-3 concurrent encode sessions (NVENC/QuickSync). This setting will prevent "too many concurrent sessions" errors.

Priority Mode

Choose between "Background" (lower priority, won't slow other apps) and "Performance" (higher priority, faster processing but may affect system responsiveness).

Memory Limit

Cap memory usage to prevent system slowdown when processing very large files or many files simultaneously.

Disk I/O Throttling

Limit disk operations per second to prevent I/O bottlenecks on HDDs or network drives.
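
None of these options exist yet, so the following is only a sketch of what such a configuration might look like; every field name here is hypothetical:

    // Hypothetical shape of the planned tuning options; none of these fields exist yet.
    interface ParallelSettings {
      concurrency: number;                     // 1-16 concurrent tasks
      gpuConcurrency: number;                  // separate cap for GPU encode sessions
      priority: "background" | "performance";  // process priority mode
      memoryLimitMB?: number;                  // optional cap on memory use
      diskOpsPerSecond?: number;               // optional I/O throttle for HDDs / network drives
    }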

Interaction with Hardware Acceleration

FlowBatch uses hardware acceleration (NVIDIA NVENC, Intel QuickSync, AMD AMF) when available. Parallel processing and hardware acceleration work together:

CPU-Only Processing

All concurrent tasks use CPU cores. More concurrency = more CPU usage. The automatic limit prevents overloading your system.

GPU-Accelerated Processing

FFmpeg encoding offloads to GPU, but consumer GPUs have session limits. Running too many concurrent GPU encodes will cause failures. Future settings will allow separate GPU concurrency limits.
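
A common way to respect both limits is to hold two separate slots per task: one from the overall concurrency pool and, for the encode step only, one from a smaller GPU pool. A minimal sketch of that idea (not FlowBatch's current behavior; the Semaphore class is hand-rolled for illustration):

    // Minimal counting semaphore: acquire() resolves once a slot is free.
    class Semaphore {
      private waiters: Array<() => void> = [];
      constructor(private slots: number) {}

      async acquire(): Promise<void> {
        if (this.slots > 0) {
          this.slots--;
          return;
        }
        await new Promise<void>((resolve) => this.waiters.push(resolve));
      }

      release(): void {
        const next = this.waiters.shift();
        if (next) next();     // hand the slot directly to a waiting task
        else this.slots++;
      }
    }

    const gpuSessions = new Semaphore(2); // e.g. a typical NVENC session limit

    // Wrap only the GPU-encoded step, so CPU work still runs at full concurrency.
    async function encodeOnGpu(encode: () => Promise<void>): Promise<void> {
      await gpuSessions.acquire();
      try {
        await encode();
      } finally {
        gpuSessions.release();
      }
    }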

HEIC Pre-Processing

HEIC decoding (heif-dec) is always CPU-based. This step benefits directly from parallel processing regardless of GPU availability.

Troubleshooting

System becomes unresponsive during processing

This can happen on systems with limited RAM or when processing very large files. Future versions will include memory limits and priority settings. For now, avoid processing other intensive tasks while FlowBatch is running.

Processing seems slower than expected

If you're processing files from/to a slow disk (HDD, network drive, USB 2.0), disk I/O may be the bottleneck. Parallel processing won't help much in this case. Consider using an SSD for input/output folders.

GPU encoding errors with many files

Consumer GPUs limit concurrent encode sessions (typically 2-3 for NVENC). If you see GPU-related errors, the workaround is to process smaller batches. Future versions will add GPU-specific concurrency limits.

Summary

FlowBatch uses parallel processing to significantly reduce batch processing time:

  • Processes multiple files simultaneously (up to 8 by default)
  • Automatically scales to your CPU core count
  • Particularly effective for HEIC conversion (3-4x speedup)
  • Works alongside hardware acceleration
  • Future versions will add user-configurable settings

The current automatic settings work well for most systems. Advanced tuning options are planned for users who want more control over resource usage.

Keywords: parallel processing, concurrent batch processing, multi-threaded image conversion, CPU utilization, performance optimization, batch processing speed, HEIC parallel conversion, hardware acceleration