NeoBERT: Modernizing Encoder Models for Enhanced Language Understanding



Encoder models like BERT and RoBERTa have long been cornerstones of natural language processing (NLP), powering tasks such as text classification, retrieval, and toxicity detection. However, while decoder-based large language models (LLMs) like GPT and LLaMA have evolved rapidly—incorporating architectural innovations, larger datasets, and extended context windows—encoders have stagnated. Despite their critical role in embedding-dependent applications, BERT-family models rely on outdated architectures, limited training data, and short context lengths, leading to suboptimal performance on modern benchmarks. In a new paper, the researchers present NeoBERT, which revitalizes encoder design by integrating advancements from decoder models while addressing the inherent limitations of existing encoders.
Traditional encoders like BERT and RoBERTa use absolute positional embeddings, Gaussian Error Linear Unit (GELU) activations, and a fixed 512-token context window. While newer models like GTE and CDE improved fine-tuning strategies for tasks like retrieval, they rely on outdated backbone architectures inherited from BERT. These backbones suffer from inefficiencies:
- Architectural Rigidity: Fixed depth-to-width ratios and positional encoding methods limit adaptability to longer sequences.
- Data Scarcity: Pre-training on small datasets (e.g., Wikipedia + BookCorpus) restricts knowledge diversity.
- Context Constraints: Short sequence lengths (512–2,048 tokens) hinder applications requiring long-context understanding.
Recent fine-tuning advancements masked these issues but failed to modernize the core models. For example, GTE’s contrastive learning boosts retrieval performance but cannot compensate for BERT’s obsolete embeddings. NeoBERT addresses these gaps through architectural overhauls, data scaling, and optimized training:
- Architectural Modernization:
  - Rotary Position Embeddings (RoPE): Replaces absolute positional embeddings with relative positioning, enabling better generalization to longer sequences. RoPE integrates positional information directly into the attention mechanism, reducing degradation on out-of-distribution lengths (see the RoPE sketch after this list).
  - Depth-to-Width Optimization: Adjusts layer depth (28 layers) and width (768 dimensions) to balance parameter efficiency and performance, avoiding the “width-inefficiency” of smaller models.
  - RMSNorm and SwiGLU: Replaces LayerNorm with RMSNorm for faster computation and adopts SwiGLU activations, enhancing nonlinear modeling while maintaining parameter count (see the RMSNorm/SwiGLU sketch after this list).
- Data and Training:
  - RefinedWeb Dataset: Trains on 600B tokens (18× larger than RoBERTa’s data), exposing the model to diverse, real-world text.
  - Two-Stage Context Extension: First pre-trains on 1,024-token sequences, then fine-tunes on 4,096-token batches using a mix of standard and long-context data. This phased approach mitigates distribution shifts while expanding the usable context (see the context-extension sketch after this list).
- Efficiency Optimizations:
  - FlashAttention and xFormers: Reduce memory overhead for longer sequences.
  - AdamW with Cosine Decay: Balances training stability and regularization (see the optimizer sketch after this list).
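To make the RoPE item concrete, here is a minimal PyTorch sketch of rotary position embeddings applied to queries and keys. It is not NeoBERT's actual implementation; the head count, head dimension, and base frequency are illustrative assumptions.

```python
import torch

def rotary_embedding(x, base=10000.0):
    # x: (batch, seq_len, n_heads, head_dim); head_dim must be even.
    b, s, h, d = x.shape
    # Per-pair rotation frequencies, as in the original RoPE formulation.
    inv_freq = 1.0 / (base ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    pos = torch.arange(s, dtype=torch.float32)
    angles = torch.einsum("s,f->sf", pos, inv_freq)              # (seq_len, d/2)
    cos = angles.cos()[None, :, None, :]
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]
    # Rotate each (even, odd) pair by its position-dependent angle.
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applied to queries and keys before attention scores are computed,
# so relative position is encoded directly in the q·k products.
q = rotary_embedding(torch.randn(2, 128, 12, 64))
k = rotary_embedding(torch.randn(2, 128, 12, 64))
```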
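Likewise, a minimal sketch of the RMSNorm and SwiGLU components, assuming the reported 768-dimensional width. The feed-forward expansion size and the absence of bias terms are assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """LayerNorm replacement: rescales by the root-mean-square only (no mean centering)."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * x * rms

class SwiGLU(nn.Module):
    """Gated feed-forward: SiLU(x @ W_gate) * (x @ W_up), projected back down."""
    def __init__(self, dim=768, hidden=2048):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down(nn.functional.silu(self.gate(x)) * self.up(x))

x = torch.randn(2, 128, 768)
y = SwiGLU()(RMSNorm(768)(x))   # norm followed by gated feed-forward, one half of a block
```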
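A hypothetical sketch of the two-stage context extension schedule: phase one pre-trains at 1,024 tokens, phase two extends to 4,096 tokens on a mix of standard and long-context data. The step counts, dataset identifiers, and helpers (make_dataloader, train_step) are placeholders rather than values from the paper.

```python
# Placeholder schedule mirroring the two-phase description above.
phases = [
    {"max_seq_len": 1024, "steps": 1_000_000, "data": ["standard_corpus"]},
    {"max_seq_len": 4096, "steps": 50_000,    "data": ["standard_corpus", "long_context_corpus"]},
]

def run_pretraining(model, make_dataloader, train_step):
    for phase in phases:
        loader = make_dataloader(phase["data"], max_seq_len=phase["max_seq_len"])
        for _, batch in zip(range(phase["steps"]), loader):
            train_step(model, batch)   # masked-language-modeling update
```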
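Finally, a skeleton of the AdamW-plus-cosine-decay setup using standard PyTorch APIs. The learning rate, weight decay, and schedule length are assumptions, and the Linear module merely stands in for the full encoder; the FlashAttention and xFormers integrations are omitted here, since in practice they replace the attention kernel rather than the optimizer setup.

```python
import torch

# Illustrative optimizer and schedule (hyperparameters are assumptions,
# not the paper's reported values).
model = torch.nn.Linear(768, 768)          # stand-in for the full encoder
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1_000_000)

for step in range(10):                     # training-loop skeleton
    loss = model(torch.randn(8, 768)).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```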
Performance and Evaluation
NeoBERT’s improvements are validated across the following benchmarks:
- GLUE: Scores 89.0%, matching RoBERTa-large’s performance despite having 100M fewer parameters. Key drivers include the RefinedWeb dataset (+3.6% gain) and scaled model size (+2.9%).
- MTEB: Outperforms GTE, CDE, and jina-embeddings by +4.5% under standardized contrastive fine-tuning, demonstrating superior embedding quality. The evaluation isolates pre-training benefits by applying an identical fine-tuning protocol to all models (a minimal sketch of such a contrastive setup follows this list).
- Context Length: NeoBERT4096 achieves stable perplexity on 4,096-token sequences after 50k additional training steps, whereas BERT struggles beyond 512 tokens. Efficiency tests show NeoBERT processes 4,096-token batches 46.7% faster than ModernBERT, despite its larger size.
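To illustrate what a standardized contrastive fine-tuning objective typically looks like, here is a generic in-batch-negative InfoNCE loss over pooled sentence embeddings. The temperature, batch size, and pooling choice are assumptions; this is not necessarily the exact protocol used in the evaluation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, doc_emb, temperature=0.05):
    """In-batch-negative contrastive loss over pooled embeddings.

    query_emb, doc_emb: (batch, dim), where row i of doc_emb is the positive
    for row i of query_emb and all other rows serve as negatives.
    """
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature                   # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

loss = info_nce_loss(torch.randn(32, 768), torch.randn(32, 768))
```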
In conclusion, NeoBERT represents a paradigm shift for encoder models, bridging the gap between stagnant architectures and modern LLM advancements. By rethinking depth-to-width ratios, positional encoding, and data scaling, it achieves state-of-the-art performance on GLUE and MTEB while supporting context windows eight times longer than BERT. Its efficiency and open-source availability make it a practical choice for retrieval, classification, and real-world applications requiring robust embeddings. However, reliance on web-scale data introduces biases, necessitating ongoing updates as cleaner datasets emerge. NeoBERT’s success underscores the untapped potential of encoder modernization, setting a roadmap for future research in efficient, scalable language understanding.
Check out the Paper and Model on Hugging Face. All credit for this research goes to the researchers of this project.