The improvement went from 0.90 to 0.93 going from an out-of-the-box BERT to a fine-tuned ModernBERT. A comparison with an out-of-the-box ModernBERT would show the improvement from fine-tuning alone.
While the paragraph is somewhat vague, based on the one before it I would interpret it as fine-tuning both BERT and ModernBERT:
> Fine-tuning answerdotai/ModernBERT-base on ~15,000 synthetic prompts for 5 epochs took 321 seconds and our best model achieved a f1 score of 0.993. I also ran the training with bert-base-uncased to compare the training time and performance. The original BERT achieved a f1 score of 0.99 and took 1048 seconds to train.
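For context, here is a minimal sketch of what such a fine-tuning comparison might look like with the Hugging Face `Trainer`. The model IDs, epoch count, and F1 metric come from the quote; the dataset, label count, and remaining hyperparameters are stand-in assumptions, not the post's actual setup:

```python
# Hedged sketch: fine-tune ModernBERT (or BERT) for sequence classification
# and report F1. Requires a recent transformers release (ModernBERT support
# landed in transformers >= 4.48).
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Swap in "bert-base-uncased" to reproduce the timing/F1 comparison.
MODEL_ID = "answerdotai/ModernBERT-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Placeholder dataset; the post used ~15,000 synthetic prompts instead.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return f1.compute(predictions=preds, references=labels)

args = TrainingArguments(
    output_dir="modernbert-finetune",
    num_train_epochs=5,              # matches the quoted 5 epochs
    per_device_train_batch_size=16,  # assumption
    learning_rate=5e-5,              # assumption
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    processing_class=tokenizer,  # enables default padding collation
    compute_metrics=compute_metrics,
)
trainer.train()
```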
“In 2025”? We’re still in 2024.