Detecting Encoding in PHP (May 2026)

```php
// Wrong approach for text encoding:
$finfo = finfo_open(FILEINFO_MIME_ENCODING);
echo finfo_file($finfo, 'file.txt'); // "us-ascii" or "utf-8" (unreliable)

// Better: read the content and detect
$content = file_get_contents('file.txt');
echo mb_detect_encoding($content);
```

We’ve all been there. You import a CSV from a client, scrape a legacy website, or process an old text file, and suddenly your output looks like é instead of é. Garbage characters. Mojibake.
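To see where mojibake comes from, here is a quick sketch: the two bytes that encode é in UTF-8, when decoded as Windows-1252, produce exactly that é.

```php
// "é" encoded as UTF-8 is the byte pair 0xC3 0xA9.
$utf8 = "\xC3\xA9";

// Decoding those bytes as Windows-1252 (and re-encoding to UTF-8 for display)
// yields the classic mojibake: 0xC3 is "Ã" and 0xA9 is "©" in Windows-1252.
echo mb_convert_encoding($utf8, 'UTF-8', 'Windows-1252'); // Ã©
```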

PHP gives us tools to handle this, but they aren't magic. Let’s look at how to reliably detect encoding—and when you shouldn't rely on detection at all.

PHP’s Multibyte String extension (mbstring) provides mb_detect_encoding(). It scans a string and tries to guess the character set:

```php
$string = "Café";
$encoding = mb_detect_encoding($string);
echo $encoding; // UTF-8 (usually)
```

By default, it checks the encodings returned by mb_detect_order(). You can pass a custom list of encodings as the second argument instead.

There’s also a pure-PHP option that, combined with mb_* functions, gives you a U::toUtf8() method that attempts detection plus conversion.

What About Files? finfo vs mb_detect_encoding

Don't confuse file encoding (how the bytes are structured) with the MIME content type: finfo reports the latter, which is why the snippet at the top of this post calls it unreliable for text encoding.
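For files, a minimal sketch of content-based detection (detectFileEncoding and its candidate list are my own names and assumptions, not a standard API): read the bytes, honor a UTF-8 BOM if present, then fall back to strict detection.

```php
// Hypothetical helper: detect a text file's encoding from its bytes.
function detectFileEncoding(string $path): string|false
{
    $content = file_get_contents($path);

    // A UTF-8 BOM (EF BB BF) is a strong signal when present.
    if (str_starts_with($content, "\xEF\xBB\xBF")) {
        return 'UTF-8';
    }

    // Strict detection against an explicit candidate list.
    return mb_detect_encoding($content, ['UTF-8', 'ISO-8859-1', 'Windows-1252'], true);
}
```

Note this requires PHP 8 for str_starts_with(); on older versions, compare with strncmp() instead.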

```php
function smartEncodingDetect(string $string, array $priorities = ['UTF-8', 'ISO-8859-1', 'Windows-1252']): string
{
    foreach ($priorities as $encoding) {
        // For UTF-8, validate it strictly
        if ($encoding === 'UTF-8' && mb_check_encoding($string, 'UTF-8')) {
            return 'UTF-8';
        }
        // For others, attempt detection
        if (mb_detect_encoding($string, $encoding, true) === $encoding) {
            return $encoding;
        }
    }
    return 'UTF-8'; // safe fallback
}
```
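Detection is only half the job; in practice you normalize everything to UTF-8 afterwards. A minimal sketch (the candidate list here is an assumption about your data sources):

```php
$raw = "Caf\xE9"; // "Café" as ISO-8859-1 bytes (0xE9 = é)

// Strict detection against an explicit candidate list.
$from = mb_detect_encoding($raw, ['UTF-8', 'ISO-8859-1', 'Windows-1252'], true);

// Convert to UTF-8, assuming UTF-8 when detection fails outright.
$utf8 = mb_convert_encoding($raw, 'UTF-8', $from ?: 'UTF-8');
echo $utf8; // Café
```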

For serious work, mb_detect_encoding() has real limitations: it can only choose among the candidates you give it, and several single-byte encodings accept almost any byte sequence, so its answer is always a guess. Pass true as the strict third argument at minimum. For truly unknown input, the gold standard is the heavier statistical approach of Mozilla’s universalchardet; community ports of those heuristics exist on Packagist if you need more than mbstring provides.
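To make those limitations concrete: ISO-8859-1 assigns a character to every byte, so if it appears in your candidate list, ambiguous single-byte text matches it, even when the bytes were really written as Windows-1252. A sketch:

```php
// 0x92 is a curly apostrophe (') in Windows-1252 but an unused
// control slot in ISO-8859-1; real text using it is almost
// certainly Windows-1252.
$bytes = "It\x92s";

// ISO-8859-1 accepts every byte, so listing it first always wins.
echo mb_detect_encoding($bytes, ['ISO-8859-1', 'Windows-1252'], true); // ISO-8859-1
```

This is why candidate order matters, and why Windows-1252 should usually come before ISO-8859-1 when both are plausible.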