
Breaking Language Barriers: Apple's New Approach to Enhancing AI's Multilingual Capabilities

Apple co-authors groundbreaking study to enhance AI's language skills in non-English languages, tackling vocabulary and grammar biases.
By Blip Tech

Summary

Issue: Non-native English speakers often find that Large Language Models (LLMs) perform much better in English than in their native languages. This gap shows up as anything from subtle drops in output quality to, in some cases, dangerous bypasses of safety filters, as a 2023 Carnegie Mellon study highlighted.

Apple's Study: Apple has co-authored a study with researchers from Inria Paris, École Polytechnique, and Sapienza University of Rome to address the English-centric bias in LLMs. The study introduces two new metrics—Lexical Naturalness and Syntactic Naturalness—to evaluate how well models generate text that matches native speaker vocabulary and grammar.
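The article doesn't reproduce the study's formulas, but the intuition behind a lexical naturalness score can be sketched as checking a model's word choices against native-speaker text. The scoring function and reference corpus below are hypothetical illustrations, not the metric from the paper:

```python
from collections import Counter

def lexical_naturalness(generated: str, native_corpus: str) -> float:
    """Toy score (0.0-1.0): fraction of tokens in the generated text
    that also occur in a native-speaker reference corpus. An
    illustrative stand-in, not the study's actual metric."""
    native_vocab = set(native_corpus.lower().split())
    tokens = generated.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in native_vocab)
    return hits / len(tokens)

# A translation-flavored phrasing scores lower against the same reference.
native = "please turn off the lights before you leave the room"
print(lexical_naturalness("turn off the lights", native))
print(lexical_naturalness("make extinct the lights", native))
```

In practice one would score against large native corpora and compare full vocabulary distributions rather than simple set membership, but the idea is the same: text that deviates from native word choice gets a lower score.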

Findings: Even Chinese-developed models like Qwen underperform in both non-English languages and English, while Meta's Llama 3.1 produces the most natural output overall but still lags behind human-level writing.

Apple's Solution: Apple proposes a method to train models to prefer more natural-sounding outputs by using back-translation to generate negative examples (unnatural patterns). By training the model to distinguish between natural and unnatural responses, they significantly improved vocabulary choice and grammar without degrading overall performance.

#Apple #AI #Research


About Blip Tech

Blip Tech is your go-to source for fast, reliable technology news. We cover everything from the latest Apple and Google announcements to breakthroughs in artificial intelligence, new smartphone releases, computer hardware, and everyday tech tips and how-tos. Our mission is to keep you informed without the fluff — just the news you need, delivered clearly and concisely.