Commit: v0.5 updates

sonam-pankaj95 committed Jan 10, 2025
1 parent e1c6390 commit 51b6851
Showing 2 changed files with 14 additions and 11 deletions.
8 changes: 5 additions & 3 deletions README.md
@@ -21,7 +21,7 @@
<div align="center">

<p align="center">
<b> Inference, ingestion, and indexing – supercharged by Rust 🦀</b>
<b> Inference, Ingestion, and Indexing – supercharged by Rust 🦀</b>
<br />
<a href="https://starlightsearch.github.io/EmbedAnything/references/"><strong>Python docs »</strong></a>
<br />
@@ -73,9 +73,11 @@ EmbedAnything is a minimalist, highly performant, lightning-fast, lightweight, m

- **Local Embedding** : Works with local embedding models like BERT and JINA
- **ONNX Models**: Works with ONNX models for BERT and ColPali
- **ColPali** : Support for ColPali in GPU version
- **ColPali** : Support for ColPali on GPU, on both ONNX and Candle
- **Splade** : Support for sparse embeddings for hybrid
- **ReRankers** : Support for ReRanking Models for better RAG.
- **ColBERT** : Support for ColBERT on ONNX
- **ModernBERT** : Increases context length to 8K tokens
- **Cloud Embedding Models** : Supports OpenAI and Cohere
- **MultiModality** : Works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV)
- **Rust** : All the file processing is done in rust for speed and efficiency
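The Splade and hybrid-retrieval items above can be illustrated with a small sketch. This is plain Python with toy vectors and a hypothetical `alpha` mixing weight, not the EmbedAnything API: it blends a dense cosine score with a sparse, SPLADE-style term-weight dot product.

```python
import math

def cosine(a, b):
    # Dense similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def sparse_dot(q, d):
    # Sparse (SPLADE-style) similarity: q and d map token -> weight;
    # the score is the dot product over shared tokens only.
    return sum(w * d[t] for t, w in q.items() if t in d)

def hybrid_score(dense_q, dense_d, sparse_q, sparse_d, alpha=0.5):
    # Blend the two signals; alpha is an illustrative mixing weight.
    return alpha * cosine(dense_q, dense_d) + (1 - alpha) * sparse_dot(sparse_q, sparse_d)

score = hybrid_score([1.0, 0.0], [1.0, 0.0], {"rust": 2.0}, {"rust": 1.5, "fast": 1.0})
print(score)  # -> 2.0
```

In practice the dense and sparse vectors would come from a dense encoder and a Splade model respectively; only the blending step is shown here.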
@@ -121,7 +123,7 @@ data = embed_anything.embed_file("file_address", embedder=model, config=config)
| Bert | All Bert based models |
| CLIP | openai/clip-* |
| Whisper| [OpenAI Whisper models](https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013)|
| ColPali | vidore/colpali-v1.2-merged |
| ColPali | starlight-ai/colpali-v1.2-merged-onnx|
| Colbert | answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2 and more |
| Splade | [Splade Models](https://huggingface.co/collections/naver/splade-667eb6df02c2f3b0c39bd248) and other Splade like models |
| Reranker | [Jina Reranker Models](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual), Xenova/bge-reranker |
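ColPali and ColBERT in the table above are late-interaction models: they keep one embedding per token and score a query against a document with MaxSim. A minimal pure-Python sketch with made-up 2-dimensional token embeddings (illustrative only, not the library's implementation):

```python
def maxsim_score(query_embs, doc_embs):
    # MaxSim: for each query token embedding, take the maximum dot
    # product over all document token embeddings, then sum over
    # query tokens.
    total = 0.0
    for q in query_embs:
        total += max(sum(qi * di for qi, di in zip(q, d)) for d in doc_embs)
    return total

query = [[1.0, 0.0], [0.0, 1.0]]            # two toy query token embeddings
doc = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]  # three toy document token embeddings
print(round(maxsim_score(query, doc), 2))   # -> 1.7
```

Because scoring happens per token rather than on one pooled vector, late interaction keeps fine-grained matches that single-vector models blur together.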
17 changes: 9 additions & 8 deletions docs/blog/posts/v0.5.md
@@ -1,6 +1,6 @@
---
draft: false
date: 2025-1-31
date: 2025-1-10
authors:
- sonam
- akshay
@@ -12,16 +12,17 @@ We are thrilled to share that EmbedAnything version 0.5 is out now and comprise

The best of all has been support for late-interaction models, both ColPali and ColBERT, on ONNX.

1. ModernBert Support: Well it made quite a splash, and we were obliged to add it, in the fastest inference engine, embedanything. In addition to being faster and more accurate, ModernBERT also increases context length to 8k tokens (compared to just 512 for most encoders), and is the first encoder-only model that includes a large amount of code in its training data.
2. ColPali- Onnx :  Running the ColPali model directly on a local machine might not always be feasible. To address this, we developed a **quantized version of ColPali**. Find it on our hugging face, link [here](https://huggingface.co/starlight-ai/colpali-v1.2-merged-onnx). You could also run it both on Candle and on ONNX.
3. ColBERT: ColBERT is a *fast* and *accurate* retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.
4. ReRankers: EmbedAnything recently contributed for the support of reranking models to Candle so as to add it in our own library. It can support any kind of reranking models. Precision meets performance! Use reranking models to refine your retrieval results for even greater accuracy.
5. Jina V3: Also contributed to V3 models, for Jina can seamlessly integrate any V3 model.
6. 𝗗𝗢𝗖𝗫 𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴
1. **ModernBERT** support: It made quite a splash, and we were obliged to add it to the fastest inference engine, EmbedAnything. In addition to being faster and more accurate, ModernBERT also increases context length to 8k tokens (compared to just 512 for most encoders), and is the first encoder-only model that includes a large amount of code in its training data.
2. **ColPali on ONNX**: Running the ColPali model directly on a local machine might not always be feasible. To address this, we developed a **quantized version of ColPali**. Find it on our Hugging Face page, link [here](https://huggingface.co/starlight-ai/colpali-v1.2-merged-onnx). You can run ColPali on both Candle and ONNX.
3. **ColBERT**: ColBERT is a *fast* and *accurate* retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.
4. **ReRankers**: EmbedAnything recently contributed support for reranking models to Candle so that we could add them to our own library, which now works with any kind of reranking model. Precision meets performance! Use reranking models to refine your retrieval results for even greater accuracy.
5. **Jina V3**: We also contributed Jina V3 support, so EmbedAnything can seamlessly integrate any V3 model.
6. **DOCX Processing**

Effortlessly extract text from .docx files and convert it into embeddings. Simplify your document workflows like never before!

7. 𝗛𝗧𝗠𝗟 𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴:
7. **HTML Processing**:

Parsing and embedding HTML documents just got easier!

✅ Extract rich metadata with embeddings
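The reranker item above follows a common two-stage pattern: a first-stage retriever returns rough candidates, and a stronger scorer re-orders them. A toy sketch of that re-sorting step, where `word_overlap` is a stand-in for a real reranking model, not EmbedAnything's API:

```python
def rerank(query, candidates, scorer):
    # Re-order candidates by the (stronger) scorer, best first.
    return sorted(candidates, key=lambda c: scorer(query, c), reverse=True)

def word_overlap(query, text):
    # Stand-in scorer: count of words shared between query and text.
    return len(set(query.lower().split()) & set(text.lower().split()))

docs = ["rust is fast", "python is easy", "fast inference in rust"]
print(rerank("fast rust inference", docs, word_overlap))
# -> ['fast inference in rust', 'rust is fast', 'python is easy']
```

A real deployment would swap `word_overlap` for a cross-encoder reranker such as the Jina or BGE models listed in the README, keeping the same retrieve-then-rerank shape.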
