Tech Deep Dive
5 min read

WER vs. CER: How to Measure Arabic ASR Accuracy

Performance
By Shameed Sait

Key Takeaways

  1. Word Error Rate (WER) and Character Error Rate (CER) are the two standard metrics for measuring the accuracy of Automatic Speech Recognition (ASR).
  2. WER is a flawed metric for Arabic because of the language’s complex morphology (multiple words combined into one) and dialectal diversity, which lead to inconsistent and misleading scores.
  3. CER is a more reliable metric for Arabic because it is not affected by word tokenization differences and provides a more stable measure of performance across systems.
  4. Key challenges in evaluating Arabic ASR accuracy include word segmentation (clitics), the lack of vowels in written text (diacritics), and multiple valid dialectal synonyms for the same word.

How is accuracy measured in Automatic Speech Recognition (ASR)? The two most common metrics are Word Error Rate (WER) and Character Error Rate (CER). For a language like English, these metrics are relatively straightforward. For Arabic, they are a minefield of linguistic complexity.

Choosing an ASR vendor based on a single, misleading accuracy score can lead to deploying a system that fails in the real world. This article breaks down what WER and CER are, explains why standard metrics fall short for the Arabic language, and provides a framework for a more intelligent and accurate assessment of Arabic ASR performance.

The Mechanics of Measurement: WER and CER

At their core, both WER and CER are based on the Levenshtein distance, a formula that calculates the minimum number of edits required to change one sequence into another. The formula is:

Error Rate = (Substitutions + Deletions + Insertions) / Total Number of Units

  • Substitutions (S): A word/character is replaced (e.g., reference is "thus," ASR output is "this").
  • Deletions (D): A word/character is missed (e.g., reference is "this is a test," ASR output is "is a test").
  • Insertions (I): A word/character is added (e.g., reference is "this is a test," ASR output is "this is a the test").

The only difference is the unit of measurement: WER uses words, and CER uses characters. A lower score is better.
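The computation above can be sketched in a few lines of Python. This is a minimal illustration using only the standard library, not a production scorer; evaluation toolkits add alignment details and normalization on top of the same core idea.

```python
def edit_distance(ref, hyp):
    # Levenshtein distance over two sequences (lists of words or strings of characters).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (free if equal)
    return d[-1]

def wer(ref, hyp):
    # Word Error Rate: edits over word sequences, divided by reference word count.
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref, hyp):
    # Character Error Rate: same formula, counted over characters (spaces ignored here).
    ref_chars = ref.replace(" ", "")
    return edit_distance(ref_chars, hyp.replace(" ", "")) / len(ref_chars)

print(wer("this is a test", "this is the test"))  # one substitution over 4 words -> 0.25
```

Note that whether spaces and punctuation count as characters for CER is itself a convention that should be stated with any reported score.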

Word Error Rate (WER)
  • Unit of measurement: word.
  • Strengths: intuitively understood.
  • Weaknesses: unreliable for morphologically rich languages like Arabic.

Character Error Rate (CER)
  • Unit of measurement: character.
  • Strengths: more robust for complex languages; independent of word tokenization.
  • Weaknesses: does not distinguish between minor and major word errors.


The Arabic Challenge: Why Standard WER Fails

Applying WER to Arabic is not a simple matter of translation. The language’s unique structure presents three fundamental challenges that can distort accuracy measurements.

1. Morphological Richness (The "One Word, Five Meanings" Problem)

Arabic is a morphologically rich language. Words are typically formed from a three-letter root that is combined with various patterns to create different meanings. Furthermore, Arabic uses a variety of clitics, which are functional particles like prepositions, conjunctions, and pronouns that attach to the beginning or end of a word. For example, the single written word "وسيكتبونها" (wasayaktubūnahā) translates to "and they will write it." This single token in Arabic corresponds to five distinct words in English.

This structure creates significant ambiguity in word segmentation: should "وسيكتبونها" be treated as one word or as multiple morphemes?

Different ASR systems and annotation standards may adopt different tokenization schemes. An ASR system that separates clitics will produce a different word count from one that does not, leading to inconsistent WER calculations. A system might correctly identify all the component morphemes but still be heavily penalized by WER if the reference transcription treats the entire token as a single word.
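A hypothetical scoring run makes this concrete. Below, the system output is assumed to be fully correct, but its clitic segmentation matches only one of two plausible references. The segmentation shown and the compact WER helper are illustrative, not taken from any real benchmark.

```python
def edit_distance(ref, hyp):
    # Levenshtein distance over two word sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref, hyp):
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

hyp = "و س يكتبون ها"         # system output with clitics separated (illustrative segmentation)
ref_whole = "وسيكتبونها"       # reference keeps the token as one word
ref_split = "و س يكتبون ها"    # reference segments clitics the same way

print(wer(ref_whole, hyp))  # 1 substitution + 3 insertions over a 1-word reference -> 4.0
print(wer(ref_split, hyp))  # identical sequences -> 0.0
```

The same correct transcription scores 400% WER or 0% WER depending purely on the reference's tokenization scheme.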

2. The Diacritics Dilemma (The Vowel Blind Spot)

Standard written Arabic is typically undiacritized, meaning it omits the short vowel marks essential for pronunciation. 

The word "كتب" can be read as 

  • kataba (he wrote), 
  • kutiba (it was written), 
  • or kutub (books). 

An ASR system must predict the correct vowels to generate an accurate phonetic representation. This creates a mismatch. If the reference text is undiacritized, the ASR system is never evaluated on its ability to produce the correct vowels. If the reference is diacritized, any vowel error is penalized, even when the core consonants are correct and the meaning is intelligible.
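A common mitigation is to strip diacritics from both reference and hypothesis before scoring, so that a system is neither rewarded nor punished for vowel marks. A minimal sketch using the standard library; the normalization policy itself is a choice that should be documented alongside any reported score:

```python
import unicodedata

def strip_diacritics(text):
    # Arabic harakat are Unicode combining marks (category "Mn"); drop them.
    return "".join(ch for ch in unicodedata.normalize("NFC", text)
                   if unicodedata.category(ch) != "Mn")

# kataba ("he wrote") and kutiba ("it was written") collapse to the same skeleton,
# so after stripping, the scorer no longer distinguishes them.
print(strip_diacritics("كَتَبَ"))  # -> كتب
print(strip_diacritics("كُتِبَ"))  # -> كتب
```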

3. Dialectal Variation (The "Which 'Now' Do You Mean?" Problem)

The Arab world is characterized by diglossia, the coexistence of Modern Standard Arabic (MSA) with dozens of regional dialects. A spoken utterance may have multiple valid transcriptions. For example, the concept of "now" can be:

  • al-ʾān (MSA)
  • dilwaʾti (Egyptian)
  • hallaʾ (Levantine)

If an ASR system outputs a valid dialectal synonym that is different from the one in the reference text, WER will mark it as an error. This penalizes the system for being correct, just in a different dialect.
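One mitigation, when multiple valid transcriptions exist, is to score the hypothesis against all of them and keep the best alignment. A minimal sketch; the transliterated strings are the examples above, and the helper names are my own:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance over two word sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref, hyp):
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def multi_reference_wer(references, hyp):
    # Score against every valid reference and report the lowest WER.
    return min(wer(ref, hyp) for ref in references)

valid_nows = ["al-ʾān", "dilwaʾti", "hallaʾ"]  # MSA, Egyptian, Levantine
print(multi_reference_wer(valid_nows, "hallaʾ"))  # a valid dialectal synonym -> 0.0
print(wer("al-ʾān", "hallaʾ"))                    # single MSA reference -> 1.0
```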



The Case for CER in Arabic ASR Evaluation

Given the limitations of WER, there is a growing consensus that Character Error Rate (CER) is a more reliable metric for Arabic ASR. Research has shown that CER correlates more closely with human judgments of transcription quality for morphologically complex languages [1].

CER is less sensitive to the word segmentation problem. Because it operates at the character level, it is not affected by whether clitics are attached to or separated from the stem word. CER is not perfect: it still struggles with the diacritics issue and does not differentiate between minor and severe word errors. But it avoids the major tokenization pitfalls that make WER so unreliable for Arabic.
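The clitic example from earlier shows this directly: WER swings wildly with the reference's segmentation, while CER, which ignores word boundaries, does not. A sketch with an illustrative segmentation of the same token:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance over two sequences (words or characters).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref, hyp):
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref, hyp):
    ref_chars = ref.replace(" ", "")
    return edit_distance(ref_chars, hyp.replace(" ", "")) / len(ref_chars)

hyp = "و س يكتبون ها"     # correct output, clitics separated (illustrative)
ref_whole = "وسيكتبونها"   # reference keeps the token as one word

print(wer(ref_whole, hyp))  # -> 4.0, punished for segmentation alone
print(cer(ref_whole, hyp))  # -> 0.0, identical characters once spaces are ignored
```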

How to Properly Evaluate an Arabic ASR Vendor

To get a meaningful assessment of an Arabic ASR system, you need to go beyond a single headline number. Here are four best practices:

  1. Report Both WER and CER: While CER may be more robust, WER remains an intuitive metric. Reporting both provides a more complete picture of system performance. The discrepancy between WER and CER can itself be an indicator of the morphological complexity of the test data.
  2. Specify Normalization and Tokenization: Any published results must be accompanied by a detailed description of the pre-processing steps applied to both the reference and hypothesis texts. This includes the tokenization scheme (e.g., separating clitics), the handling of diacritics (e.g., stripping them), and the normalization of characters (e.g., unifying different forms of the letter alif).
  3. Use Morpheme-Based Evaluation: For a more linguistically sound evaluation, consider decomposing words into their constituent morphemes before calculating the error rate. This provides a more granular assessment of performance and rewards systems that correctly identify morphemes even if the full word form is incorrect.
  4. Account for Dialectal Variation: Whenever possible, use evaluation datasets that include multiple valid reference transcriptions to account for dialectal synonyms. If this is not feasible, performance should be reported separately for different dialects to avoid penalizing systems for dialect-specific accuracy.
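For the second practice, a typical normalization pass might strip diacritics, unify alif variants, and remove tatweel before scoring. A sketch of such a pass; the exact rule set shown here is one plausible convention, not a standard, and should be stated explicitly alongside any published results:

```python
import unicodedata

def normalize_arabic(text):
    # Strip harakat (Unicode combining marks, category "Mn").
    text = "".join(ch for ch in unicodedata.normalize("NFC", text)
                   if unicodedata.category(ch) != "Mn")
    # Unify hamza-carrying and madda alif forms with bare alif.
    for alif in "أإآ":
        text = text.replace(alif, "ا")
    # Drop tatweel (kashida), a purely typographic elongation character.
    return text.replace("ـ", "")

print(normalize_arabic("أَهْلاً"))  # -> اهلا
```

Applying the same function to both reference and hypothesis keeps the comparison consistent; applying it to only one side silently biases the score.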


Moving Beyond Flawed Metrics

Measuring the accuracy of Arabic speech recognition is not a solved problem. The standard metrics of WER and CER are blunt instruments when applied to a language of such intricacy. A one-size-fits-all approach is insufficient.

A shift towards CER as the primary metric, supplemented by detailed reporting of data processing and a move towards more linguistically-aware evaluation methods, is essential for driving meaningful progress. By acknowledging these challenges, enterprises can better evaluate ASR solutions and choose a partner that understands the linguistic realities of the Arab world.

