Periodic re-evaluation

Selecting the best embedding model is a major milestone, but it is worth noting that this is not a one-time activity.

The field evolves rapidly, with new models released regularly that may offer significant improvements. Furthermore, once integrated into a full system, an embedding model may behave differently than it did during isolated evaluation.

Therefore, re-evaluate your embedding model choices periodically, or monitor for signals that a re-evaluation is needed:

  • Monitor benchmark leaderboards: Check resources like MTEB to identify promising new models.
  • Track performance metrics: A degradation in your application's retrieval quality should trigger a review.
  • Review changing requirements: As your data distribution, languages, or domains change, your original model selection criteria may need updating.
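Tracking performance metrics is easiest when you keep a small, fixed evaluation set and score every candidate model against it with the same metric. The sketch below illustrates this with a recall@k comparison over toy vectors; the `recall_at_k` function, the synthetic "current" and "candidate" query vectors, and the tiny evaluation set are all hypothetical stand-ins for embeddings your real models would produce.

```python
import numpy as np

def recall_at_k(query_vecs, doc_vecs, relevant, k=2):
    """Fraction of queries whose relevant document appears in the
    top-k cosine-similarity results. relevant[i] is the index of the
    single relevant document for query i."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = q @ d.T                              # cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]     # best k docs per query
    hits = sum(relevant[i] in topk[i] for i in range(len(relevant)))
    return hits / len(relevant)

# Toy fixed evaluation set: 4 documents, 3 queries, ground-truth mapping.
# In practice these vectors would come from each model's encode() call.
rng = np.random.default_rng(42)
doc_vecs = rng.normal(size=(4, 8))
relevant = [0, 2, 3]

# "Current model": query vectors land close to their relevant documents.
queries_current = doc_vecs[relevant] + rng.normal(scale=0.1, size=(3, 8))
# "Candidate model": unrelated vectors, simulating a poorly fitted model.
queries_candidate = rng.normal(size=(3, 8))

print(f"current:   recall@1 = {recall_at_k(queries_current, doc_vecs, relevant, k=1):.2f}")
print(f"candidate: recall@1 = {recall_at_k(queries_candidate, doc_vecs, relevant, k=1):.2f}")
```

Re-running the same script whenever a new model appears on a leaderboard gives you a like-for-like comparison on your own data, rather than relying on benchmark scores alone.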

When significant changes occur in either your requirements or available models, simply repeat the selection and evaluation process described in this module.

Repeating the process ensures that consistent, repeatable principles guide your embedding model selection as both your application and the technology landscape evolve.

By treating model selection as an ongoing process rather than a fixed decision, you'll maintain the quality and effectiveness of your AI applications over time.

Questions and feedback

If you have any questions or feedback, let us know in the user forum.