
JPEG Quality Settings Explained: What the Numbers Actually Mean

A deep technical guide to JPEG quality settings — quantization tables, perceptual quality metrics (SSIM, PSNR), chroma subsampling, progressive JPEG, and why MozJPEG produces smaller files at the same quality.

Krunkit Team · 13 min read

Every image tool has a JPEG quality slider, usually ranging from 1 to 100. Most developers know that higher numbers mean better quality and larger files. But what do these numbers actually control? Why does quality 80 look nearly identical to quality 95 while being half the file size? And why do two different encoders at "quality 80" produce different results?

Understanding what happens inside JPEG compression helps you make better decisions about quality settings — and explains why tools like MozJPEG can produce smaller files without visible quality loss.

The JPEG Compression Pipeline

When you save a JPEG, your image goes through a specific sequence of transformations:

  1. Color space conversion: RGB pixels are converted to YCbCr (luminance + two chrominance channels).
  2. Chroma subsampling: The chrominance channels are optionally downsampled (reduced in resolution).
  3. Block splitting: The image is divided into 8x8 pixel blocks.
  4. DCT (Discrete Cosine Transform): Each block is transformed from pixel values to frequency coefficients.
  5. Quantization: Frequency coefficients are divided by values from a quantization table and rounded. This is where data is permanently lost.
  6. Entropy coding: The quantized coefficients are compressed using Huffman coding (or arithmetic coding).

The quality slider controls step 5 — quantization. Everything else in the pipeline is either lossless or controlled by separate settings.
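
Step 1 is just a linear transform applied to every pixel. Here's a minimal Python sketch using the JFIF (BT.601-derived) coefficients; real encoders do this in optimized integer code, but the math is the same:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one full-range RGB pixel to YCbCr using the
    JFIF (ITU-R BT.601 derived) coefficients."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    # Real encoders clamp and round to 8-bit integers here, which is
    # one source of irreversibility even at quality 100.
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(y), clamp(cb), clamp(cr)

print(rgb_to_ycbcr(128, 128, 128))  # neutral gray: Cb = Cr = 128
print(rgb_to_ycbcr(255, 0, 0))      # pure red: Cr saturates well above 128
```

Note that a neutral gray maps to Cb = Cr = 128: the chrominance channels only carry information where there is color, which is what makes them cheap to subsample later.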

Quantization: Where Quality Happens

The DCT step transforms each 8x8 block from 64 pixel brightness values into 64 frequency coefficients. Low-frequency coefficients represent the average color and broad gradients. High-frequency coefficients represent fine detail and sharp edges.

The quantization step divides each coefficient by a corresponding value from an 8x8 quantization table, then rounds to the nearest integer. Large divisors in the quantization table mean more aggressive rounding, which means more data loss but smaller files.
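
You can see the loss directly in a few lines of Python. The coefficients below are illustrative values of the kind a DCT of a real photo block produces, and the divisors are the first row of the standard luminance table shown in the next section:

```python
def quantize(coeffs, qtable):
    # Divide each DCT coefficient by its table entry and round;
    # this rounding is where information is permanently discarded.
    return [round(c / q) for c, q in zip(coeffs, qtable)]

def dequantize(quantized, qtable):
    # The decoder can only multiply back; the rounding error remains.
    return [v * q for v, q in zip(quantized, qtable)]

coeffs = [-415, -30, -61, 27, 56, -20, -2, 0]  # illustrative DCT row
qtable = [16, 11, 10, 16, 24, 40, 51, 61]      # standard table, first row
q = quantize(coeffs, qtable)
print(q)                      # [-26, -3, -6, 2, 2, 0, 0, 0]
print(dequantize(q, qtable))  # [-416, -33, -60, 32, 48, 0, 0, 0]
```

The quantized row is small integers with a run of zeros, which is exactly what the entropy coder compresses well. The dequantized row is close to the original but not equal to it: that gap is the "loss" in lossy.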

The Standard Quantization Table

The JPEG standard (ITU-T T.81) defines a suggested luminance quantization table for quality 50:

16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99

The top-left value (16) applies to the DC coefficient (average brightness of the block). The bottom-right values (99-121) apply to the highest-frequency coefficients. Higher values mean more aggressive quantization — more data loss in those frequency components.

How Quality Maps to Quantization

When you set quality to 80, the encoder scales this base table. The exact formula varies by encoder, but the general approach is:

  • Quality 50: Uses the base table as-is.
  • Quality > 50: Scales table values down (less aggressive quantization, higher quality). Quality 75 roughly halves the base table values. Quality 95 divides them by about 10.
  • Quality < 50: Scales table values up (more aggressive quantization, lower quality). Quality 25 roughly doubles the base table values.

This is why the quality scale is nonlinear. The jump from quality 90 to 95 changes the quantization table significantly more than the jump from 50 to 55. And the jump from 95 to 100 makes the quantization values so small that almost no data is lost — producing near-lossless output at enormous file sizes.
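
The classic IJG libjpeg mapping (which MozJPEG inherits) is simple enough to sketch directly:

```python
def scale_qtable(base_table, quality):
    """Scale a base quantization table the way the classic IJG
    libjpeg code does. Quality 50 leaves the table unchanged."""
    quality = max(1, min(100, quality))
    if quality < 50:
        scale = 5000 // quality      # quality 25 -> scale 200 (table doubled)
    else:
        scale = 200 - 2 * quality    # quality 75 -> 50, 95 -> 10, 100 -> 0
    # Each entry is scaled, rounded, and clamped to the legal 1..255 range.
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base_table]

row = [16, 11, 10, 16, 24, 40, 51, 61]
print(scale_qtable(row, 50))   # unchanged: the base table is defined at 50
print(scale_qtable(row, 75))   # roughly halved
print(scale_qtable(row, 100))  # all 1s: minimal quantization, rounding remains
```

You can see the nonlinearity in the `scale` formula: above 50, each quality point changes the scale by 2, but the table values shrink toward their floor of 1, so the last few points mostly inflate file size.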

Why Quality 100 Is Not Lossless

Even at quality 100, JPEG is still lossy. The quantization table values become very small (mostly 1s and 2s), but rounding still occurs. Additionally, the color space conversion from RGB to YCbCr involves floating-point math with inherent rounding errors, and chroma subsampling (if enabled) permanently discards chrominance data.

If you need truly lossless output, use PNG, WebP lossless, or AVIF lossless.

The Diminishing Returns Curve

Here's how file size and visual quality relate across the quality range, using a typical 2000x1500 photograph:

| Quality | File Size | Relative Size | Visual Assessment |
|---------|-----------|---------------|-------------------|
| 100 | 2,850 KB | 100% | Indistinguishable from source |
| 95 | 1,420 KB | 50% | Indistinguishable from source |
| 90 | 860 KB | 30% | Indistinguishable for most viewers |
| 85 | 620 KB | 22% | Excellent; slight softening on extreme zoom |
| 80 | 485 KB | 17% | Very good; optimal for most web use |
| 70 | 340 KB | 12% | Good; some detail loss in textures |
| 60 | 265 KB | 9% | Acceptable; visible artifacts in gradients |
| 50 | 210 KB | 7% | Noticeable quality loss; blocking visible |
| 30 | 145 KB | 5% | Poor; significant artifacts |

The sweet spot for web delivery is typically quality 75-85. You get 70-80% file size reduction compared to quality 100, with quality loss that's imperceptible to most viewers under normal viewing conditions.

Perceptual Quality Metrics: SSIM and PSNR

"Quality" in the JPEG slider is not the same as perceived visual quality. Two images at "quality 80" from different encoders may look different because they use different quantization tables and optimization strategies.

To objectively measure how close a compressed image is to the original, we use perceptual quality metrics.

PSNR (Peak Signal-to-Noise Ratio)

PSNR measures the mathematical difference between the original and compressed image in decibels. Higher is better.

| PSNR | Quality Assessment |
|------|--------------------|
| > 40 dB | Excellent; differences invisible |
| 35-40 dB | Very good; differences barely perceptible |
| 30-35 dB | Good; some differences visible on close inspection |
| 25-30 dB | Moderate; differences clearly visible |
| < 25 dB | Poor; significant degradation |

PSNR's weakness is that it treats all pixel differences equally. A 1-pixel shift along a sharp edge (invisible to humans) gets the same penalty as a color shift in a smooth gradient (very visible to humans).
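
PSNR is straightforward to compute from the mean squared error. Here's a minimal sketch over flattened 8-bit pixel sequences:

```python
import math

def psnr(original, compressed, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit pixel values (e.g. flattened grayscale images)."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return math.inf  # identical images: PSNR is undefined/infinite
    return 10 * math.log10(max_value ** 2 / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
comp = [52, 54, 62, 66, 69, 60, 65, 72]  # each pixel off by at most 1
print(round(psnr(orig, comp), 1))        # ~49.4 dB: well into "excellent"
```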

SSIM (Structural Similarity Index)

SSIM is a more perceptually accurate metric. It compares images based on luminance, contrast, and structural patterns — factors that align better with human visual perception. SSIM technically ranges from -1 to 1, but for compressed images it falls between 0 and 1, where 1 means identical.

| SSIM | Quality Assessment |
|------|--------------------|
| > 0.98 | Visually lossless; no perceptible difference |
| 0.95-0.98 | Excellent; differences only visible in A/B comparison |
| 0.90-0.95 | Good; minor differences visible |
| 0.85-0.90 | Acceptable; noticeable quality loss |
| < 0.85 | Poor; obvious degradation |

SSIM is the preferred metric for evaluating image compression quality. When comparing encoders or quality settings, SSIM gives the most reliable prediction of what humans will actually perceive.
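
For intuition, here's the SSIM formula computed over a single global window. Note this is a simplification: production SSIM implementations slide a small window (often 8x8 or 11x11, Gaussian-weighted) across the image and average the local scores:

```python
def ssim_global(x, y, max_value=255):
    """Single-window SSIM over two flattened images, illustrating the
    formula only. Real SSIM averages many local windows instead."""
    n = len(x)
    c1, c2 = (0.01 * max_value) ** 2, (0.03 * max_value) ** 2  # stabilizers
    mx, my = sum(x) / n, sum(y) / n                 # mean luminance
    vx = sum((a - mx) ** 2 for a in x) / n          # variance (contrast)
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

orig = [52, 55, 61, 66, 70, 61, 64, 73]
print(ssim_global(orig, orig))                   # identical -> 1.0
print(ssim_global(orig, [v + 5 for v in orig]))  # brightness shift -> just below 1.0
```

Notice that a uniform brightness shift barely moves the score, because the structural term (the covariance) is unchanged. PSNR would penalize the same shift on every single pixel.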

Comparing at Equal SSIM

When you compare encoders at the same SSIM value (equal perceived quality), the differences in file size become clear:

| Target SSIM | Standard JPEG | MozJPEG | File Size Difference |
|-------------|---------------|---------|----------------------|
| 0.98 | 520 KB | 445 KB | MozJPEG 14% smaller |
| 0.96 | 380 KB | 318 KB | MozJPEG 16% smaller |
| 0.94 | 290 KB | 241 KB | MozJPEG 17% smaller |
| 0.92 | 228 KB | 192 KB | MozJPEG 16% smaller |

At the same visual quality, MozJPEG consistently produces files 14-17% smaller than the standard JPEG encoder. This isn't magic — it's the result of better optimization in the quantization and entropy coding stages.

Chroma Subsampling

Remember step 2 in the pipeline: chroma subsampling. This is a separate quality lever from the quality slider, and it has a significant impact on both file size and quality.

Human vision is more sensitive to changes in brightness (luminance) than changes in color (chrominance). Chroma subsampling exploits this by storing color information at lower resolution than brightness information.

Common Subsampling Modes

| Mode | Description | File Size Impact |
|------|-------------|------------------|
| 4:4:4 | Full color resolution. No subsampling. | Baseline (largest) |
| 4:2:2 | Color resolution halved horizontally. | ~15-20% smaller |
| 4:2:0 | Color resolution halved both horizontally and vertically. | ~25-35% smaller |

Most JPEG encoders default to 4:2:0 subsampling below a certain quality threshold (often quality 90-95) and switch to 4:4:4 above it.
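
The 4:2:0 operation itself is simple: each 2x2 block of chroma samples collapses to one value. A minimal sketch (encoders differ in the exact filter they use; plain box averaging is shown here):

```python
def subsample_420(plane):
    """Downsample one chroma plane (a list of rows) by averaging each
    2x2 block, as in 4:2:0 subsampling. Assumes even dimensions."""
    out = []
    for r in range(0, len(plane), 2):
        row = []
        for c in range(0, len(plane[0]), 2):
            block_sum = (plane[r][c] + plane[r][c + 1] +
                         plane[r + 1][c] + plane[r + 1][c + 1])
            row.append(round(block_sum / 4))
        out.append(row)
    return out

cb = [[90, 90, 200, 200],
      [90, 90, 200, 200],
      [128, 128, 128, 128],
      [128, 128, 128, 128]]
print(subsample_420(cb))  # [[90, 200], [128, 128]] -- a quarter of the samples
```

The decoder upscales this plane back to full resolution, which is why a one-pixel-wide red line can smear into its neighbors: its chroma was averaged with theirs on the way down.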

When Chroma Subsampling Matters

For photographs of natural scenes, 4:2:0 is usually fine — the human eye can't distinguish the color resolution loss at normal viewing sizes.

For images with:

  • Fine colored text (red text on white background)
  • Sharp color transitions (colored lines, diagrams)
  • Saturated color detail (colored thread in fabric close-ups)

...4:2:0 subsampling can produce visible color fringing. For these cases, use 4:4:4 (or better yet, use PNG/WebP where chroma subsampling isn't an issue).

Progressive JPEG

Standard (baseline) JPEG loads top-to-bottom: the first row of pixels appears first, then the second row, and so on. Progressive JPEG loads in multiple passes: first a blurry version of the entire image, then progressively sharper versions.

How Progressive JPEG Works

A progressive JPEG stores the DCT coefficients in multiple scans instead of one:

  1. Scan 1: Only the DC coefficients (average color per 8x8 block). This renders as a very blurry, blocky preview — like an 8x downscaled version.
  2. Scan 2-3: Lower AC coefficients are added. The image becomes recognizable but still soft.
  3. Final scans: Remaining high-frequency AC coefficients fill in the fine detail.
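
You can approximate what scan 1 renders with a short sketch: collapse each 8x8 block of a (grayscale) image to its average, which is what the DC coefficient encodes:

```python
def dc_preview(image, block=8):
    """Approximate scan 1 of a progressive JPEG: every 8x8 block
    collapses to its average (DC) value, one value per block."""
    out = []
    for r in range(0, len(image), block):
        row = []
        for c in range(0, len(image[0]), block):
            vals = [image[r + i][c + j]
                    for i in range(block) for j in range(block)]
            row.append(round(sum(vals) / len(vals)))
        out.append(row)
    return out

# An 8x16 horizontal gradient collapses to two flat values, one per block.
img = [[c * 2 for c in range(16)] for _ in range(8)]
print(dc_preview(img))  # [[7, 23]]
```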

Progressive vs Baseline: File Size

Progressive JPEG is typically 2-5% smaller than baseline JPEG for images larger than about 10 KB. This is because the multiple-scan organization allows the entropy coder to work more efficiently on similar coefficient ranges grouped together.

For very small images (thumbnails under 10 KB), baseline can actually be slightly smaller because the overhead of the scan headers outweighs the compression benefit.

Progressive vs Baseline: Perceived Performance

Progressive JPEG provides a better perceived loading experience on slow connections. The user sees a complete (blurry) image almost immediately, rather than watching it load row by row. On fast connections, the difference is imperceptible because the entire image loads in milliseconds regardless.

When to Use Progressive JPEG

  • Images larger than 10 KB (which is nearly all content images)
  • Pages targeting slower connections (mobile users, emerging markets)
  • Hero images where showing something immediately is better than showing nothing

MozJPEG produces progressive JPEG by default.

MozJPEG: Why It Produces Smaller Files

MozJPEG is Mozilla's optimized JPEG encoder, based on libjpeg-turbo with additional compression improvements. It's fully JPEG-compatible — any standard JPEG decoder can read MozJPEG output.

MozJPEG achieves 10-15% smaller file sizes through several techniques:

Trellis Quantization

Standard JPEG quantization simply rounds each DCT coefficient to the nearest integer after division. Trellis quantization considers the overall cost of different rounding choices — sometimes rounding a coefficient up instead of down produces a slightly worse individual value but creates a pattern that compresses better overall.

This is like solving a global optimization problem instead of making greedy local decisions. The visual quality stays the same or improves, but the entropy of the quantized coefficients decreases, producing smaller files.

Optimized Huffman Tables

Standard JPEG uses generic Huffman tables defined in the JPEG specification. MozJPEG generates custom Huffman tables optimized for the specific image being encoded. This alone can save 5-8% in file size.
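
To see why per-image tables help, here's a toy Huffman construction in Python. This is the standard textbook algorithm, not MozJPEG's actual code (real JPEG tables are canonical and capped at 16-bit code lengths), but it shows the key effect: common symbols get short codes.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Build a Huffman tree over symbol frequencies and return the code
    length per symbol -- the quantity a custom JPEG table optimizes."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tiebreak index, {symbol: depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)   # merge the two rarest subtrees;
        f2, _, d2 = heapq.heappop(heap)   # every symbol inside gets 1 bit deeper
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

# A skewed distribution (lots of zeros after quantization) gives the
# common symbol a 1-bit code and rare symbols longer codes.
lengths = huffman_code_lengths([0] * 80 + [1] * 10 + [2] * 6 + [5] * 4)
print(lengths)  # 0 -> 1 bit; 1 -> 2 bits; 2 and 5 -> 3 bits
```

A generic table built for "average" images can't exploit how skewed any particular image's coefficient distribution is; a table built from the actual frequencies always can.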

Scan Optimization for Progressive JPEG

MozJPEG carefully optimizes how DCT coefficients are distributed across progressive scans. The scan structure affects compression efficiency, and MozJPEG's scan optimization can save an additional 2-4%.

Practical Impact

For web developers, MozJPEG is effectively a free performance improvement. You set the same quality level you'd use with a standard encoder, and the output is 10-15% smaller with identical or better visual quality.

Krunkit uses MozJPEG compiled to WebAssembly for its JPEG compression. When you compress a JPEG on Krunkit, you're getting MozJPEG's optimized output without needing to install anything or run a command-line tool.

Practical Quality Setting Recommendations

By Content Type

| Content Type | Recommended Quality | Reasoning |
|--------------|---------------------|-----------|
| Hero photographs | 80-85 | Prominent, viewed at large size |
| Content/article photos | 75-80 | Inline images, moderate size on screen |
| Thumbnails | 65-75 | Small display size hides artifacts |
| Product photos (e-commerce) | 80-85 | Quality impacts purchase confidence |
| User-uploaded photos | 75-80 | Balance quality and storage costs |
| Social media previews | 70-75 | Platforms often recompress anyway |
| Email images | 70-80 | File size matters for deliverability |

Finding Your Optimal Quality

Rather than guessing, find the right quality empirically:

  1. Take 5-10 representative images from your site.
  2. Encode each at quality 60, 70, 75, 80, 85, and 90.
  3. View the results at actual display size on both desktop and mobile.
  4. Identify the lowest quality where you can't distinguish the compressed version from the original.
  5. Use that quality level (or one step above for safety margin) across your site.

This takes 30 minutes and gives you a data-driven quality setting instead of an arbitrary number.

The Quality 80 Default

Quality 80 has become the de facto default for web JPEGs, and for good reason. At quality 80 with MozJPEG:

  • File sizes are typically 70-80% smaller than quality 100
  • SSIM is typically above 0.96 (excellent perceived quality)
  • Compression artifacts are invisible at normal viewing distances on screens up to about 27 inches
  • It works reasonably well across all content types

If you're unsure what quality to use, 80 with MozJPEG is a solid starting point. You can adjust up or down based on visual inspection of your specific content.

Key Takeaways

  1. The quality number controls quantization table scaling. It's not a percentage of the original quality — it's a parameter that determines how aggressively frequency data is rounded.

  2. The quality scale is nonlinear. Going from 95 to 100 doubles file size for nearly invisible improvement. Going from 85 to 80 saves significant space with minimal quality loss.

  3. Use SSIM, not PSNR, to evaluate quality. SSIM correlates better with human perception. Aim for SSIM > 0.95 for web content.

  4. Chroma subsampling is a separate quality lever. 4:2:0 is fine for photographs; use 4:4:4 for images with fine color detail or sharp colored edges.

  5. Progressive JPEG is almost always better for web delivery — slightly smaller files and better perceived loading performance.

  6. MozJPEG gives you 10-15% free compression over standard JPEG encoders at the same visual quality. There's no reason not to use it.

  7. Quality 75-85 is the sweet spot for web images. Test with your actual content, but this range works well for the vast majority of use cases.