When feeding structured data into a RAG system, engineers usually default to embedding raw JSON directly into a vector database. In reality, however, this intuitive approach leads to dramatically poor performance. Modern embedding models are based on the BERT architecture, which is essentially the encoder part of a Transformer, and are trained on massive unstructured text datasets with the main goal of capturing semantic meaning. They can deliver incredible retrieval performance on natural language, but nothing in that training optimizes them for structured formats. As a result, although embedding JSON may seem like a simple and elegant solution, using a generic embedding model on raw JSON objects yields results far from peak performance.
Deep dive
Tokenization
The first step is tokenization, which splits the text into tokens, typically sub-word units. Modern embedding models rely on Byte-Pair Encoding (BPE) or WordPiece tokenization algorithms. These algorithms are optimized for natural language, breaking words into common sub-components. When a tokenizer encounters raw JSON, it struggles with the high frequency of non-alphanumeric characters. For example, "usd": 10, is not seen as a key-value pair; instead, it is fragmented into:
- The quotes ("), colon (:), and comma (,) as separate tokens
- The tokens usd and 10
This creates a low signal-to-noise ratio. In natural language, almost all tokens contribute to the semantic "signal", whereas in JSON (and other structured formats) a significant share of tokens is "wasted" on structural syntax that carries zero semantic value.
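You can see this fragmentation directly by running the tokenizer behind all-MiniLM-L6-v2 (the model used later in this article) on both forms; a minimal sketch:

```python
from transformers import AutoTokenizer

# WordPiece tokenizer backing the all-MiniLM-L6-v2 embedding model
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

print(tokenizer.tokenize('"usd": 10,'))
# mostly punctuation tokens, e.g. ['"', 'us', '##d', '"', ':', '10', ',']

print(tokenizer.tokenize("The price is 10 US dollars"))
# every token carries meaning, e.g. ['the', 'price', 'is', '10', 'us', 'dollars']
```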
Attention calculation
The core strength of Transformers lies in the attention mechanism, which allows the model to weigh the importance of tokens relative to one another.
In the sentence The price is 10 US dollars or 9 euros, attention can easily link the value 10 to the concept price, because these relationships are well represented in the model's pre-training data and the model has seen this linguistic pattern millions of times. In the raw JSON, on the other hand:
"value": {
"usd": 10,
"eur": 9,
}
the model encounters structural syntax it was never optimized to "read". Without the linguistic connectors, the resulting vector fails to capture the true intent of the data, since the relationships between keys and values are obscured by the format itself.
Mean Pooling
The final step in producing a single embedding representation of the document is mean pooling. Mathematically, the final embedding E is the centroid of all token vectors e_1, e_2, …, e_n in the document:

E = (e_1 + e_2 + … + e_n) / n
This is where the JSON tokens become a mathematical liability. If 25% of the tokens in a document are structural markers (braces, quotes, colons), the final vector is heavily influenced by the "meaning" of punctuation. These noise tokens effectively pull the vector away from its true semantic center in the vector space. When a user submits a natural language query, the distance between the "clean" query vector and the "noisy" JSON vector grows, directly hurting retrieval metrics.
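A toy numeric example makes the pull visible; the 2-D vectors below are made up purely for illustration:

```python
import numpy as np

# Imaginary token vectors: three semantic tokens clustered in one direction...
semantic = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])
# ...and two structural tokens (quotes, colons) pointing elsewhere
noise = np.array([[0.0, 1.0], [0.1, 0.9]])

print(semantic.mean(axis=0))                      # centroid of the meaningful tokens
print(np.vstack([semantic, noise]).mean(axis=0))  # centroid dragged toward the noise
```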
Flatten it
Now that we understand the limitations of raw JSON, we need to figure out how to work around them. The most common and straightforward approach is to flatten the JSON and convert it into natural language.
Let's consider a typical product object:
{
    "skuId": "123",
    "description": "This is a test product used for demonstration purposes",
    "quantity": 5,
    "price": {
        "usd": 10,
        "eur": 9
    },
    "availableDiscounts": ["1", "2", "3"],
    "giftCardAvailable": "true",
    "category": "demo product"
    ...
}
This is a simple object with a few attributes such as description. Let's run the tokenizer over it and see what it looks like:
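A minimal way to inspect this (the `product` dict below is a trimmed stand-in for the object above):

```python
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Trimmed stand-in for the product object shown above
product = {"skuId": "123", "quantity": 5, "price": {"usd": 10, "eur": 9}}

tokens = tokenizer.tokenize(json.dumps(product))
print(len(tokens))   # total token count
print(tokens[:15])   # a large share are braces, quotes, and colons
```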

Now, let's convert it into text to make the embedding model's job easier. To do that, we can define a template and substitute the JSON values into it. For example, this template could be used to describe the product:
Product with SKU {skuId} belongs to the category "{category}"
Description: {description}
It has a quantity of {quantity} available
The price is {price.usd} US dollars or {price.eur} euros
Available discount ids include {availableDiscounts as comma-separated list}
Gift cards are {giftCardAvailable ? "available" : "not available"} for this product
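Rendered in code, this could look like the following; `render_product` is a hypothetical helper mirroring the template above, not the exact function from the experiment:

```python
def render_product(p: dict) -> str:
    # Hypothetical template renderer for the product object shown earlier
    discounts = ", ".join(p["availableDiscounts"])
    gift_cards = "available" if p["giftCardAvailable"] == "true" else "not available"
    return (
        f'Product with SKU {p["skuId"]} belongs to the category "{p["category"]}"\n'
        f'Description: {p["description"]}\n'
        f'It has a quantity of {p["quantity"]} available\n'
        f'The price is {p["price"]["usd"]} US dollars or {p["price"]["eur"]} euros\n'
        f'Available discount ids include {discounts}\n'
        f'Gift cards are {gift_cards} for this product'
    )
```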
So the final result will look like:
Product with SKU 123 belongs to the category "demo product"
Description: This is a test product used for demonstration purposes
It has a quantity of 5 available
The price is 10 US dollars or 9 euros
Available discount ids include 1, 2, and 3
Gift cards are available for this product
And apply the tokenizer to it:

Not only does it have 14% fewer tokens now, but it is also a much clearer form that carries the semantic meaning and the required context.
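As a quick check, the token counts of both representations can be compared directly, reusing the tokenizer, the `product` dict, and the hypothetical `render_product` helper from the sketches above:

```python
import json

# Assumes `tokenizer`, `product`, and `render_product` from the earlier sketches
n_json = len(tokenizer.tokenize(json.dumps(product)))
n_text = len(tokenizer.tokenize(render_product(product)))
print(n_json, n_text, f"{1 - n_text / n_json:.0%} fewer tokens")
```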
Let's measure the results
Note: Full, reproducible code for this experiment is available in the Google Colab notebook [1].
Now let's measure retrieval performance for both options. We will focus on the standard retrieval metrics Recall@k, Precision@k, and MRR to keep things simple, and will use a generic embedding model (all-MiniLM-L6-v2) together with the Amazon ESCI dataset, sampling 5,000 random queries and their 3,809 relevant products.
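For reference, these metrics are straightforward to compute from ranked retrieval results; a minimal sketch, not the exact code from the notebook:

```python
def recall_at_k(retrieved_ids: list, relevant_ids: set, k: int = 10) -> float:
    # Fraction of the relevant products that appear in the top-k results
    return len(set(retrieved_ids[:k]) & relevant_ids) / len(relevant_ids)

def precision_at_k(retrieved_ids: list, relevant_ids: set, k: int = 10) -> float:
    # Fraction of the top-k results that are relevant
    return len(set(retrieved_ids[:k]) & relevant_ids) / k

def mrr(retrieved_ids: list, relevant_ids: set) -> float:
    # Reciprocal rank of the first relevant product, 0 if none is retrieved
    for rank, pid in enumerate(retrieved_ids, start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0
```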
all-MiniLM-L6-v2 is a popular choice: it is small (22.7M parameters) yet fast and accurate, making it a good fit for this experiment.
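With the sentence-transformers library, loading the model and embedding text is a few lines; a minimal sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# normalize_embeddings=True makes inner product equal to cosine similarity,
# which matches the IndexFlatIP indexes used below
vectors = model.encode(["some product text"], normalize_embeddings=True)
print(vectors.shape)  # (1, 384)
```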
For the dataset, a prepared version of Amazon ESCI is used, specifically milistu/amazon-esci-data [3], which is available on Hugging Face and contains a set of Amazon products and search-query data.
The flattening function used for the text conversion is:
def flatten_product(product):
    return (
        f"Product {product['product_title']} from brand {product['product_brand']}"
        f" and product id {product['product_id']}"
        f" and description {product['product_description']}"
    )
A sample of the raw JSON data is:
{
  "product_id": "B07NKPWJMG",
  "title": "RoWood 3D Puzzles for Adults, Wooden Mechanical Gear Kits for Teens Kids Age 14+",
  "description": " Specifications
  Model Number: Rowood Treasure box LK502
  Average build time: 5 hours
  Total Pieces: 123
  Model weight: 0.69 kg
  Box weight: 0.74 KG
  Assembled size: 100*124*85 mm
  Box size: 320*235*39 mm
  Certificates: EN71,-1,-2,-3,ASTMF963
  Recommended Age Range: 14+
  Contents
  Plywood sheets
  Metal Spring
  Illustrated instructions
  Accessories
  MADE FOR ASSEMBLY
  -Follow the instructions provided in the booklet and assemble the 3D puzzle with some exciting and engaging fun. Feel the pleasure of self-creation getting this pretty wooden work like a pro.
  GLORIFY YOUR LIVING SPACE
  -Revive the enigmatic charm and cheer your parties and get-togethers with an experience that is unique and fascinating.
  ",
  "brand": "RoWood",
  "color": "Treasure Box"
}
For the vector search, two FAISS indexes are created: one for the flattened text and one for the JSON-formatted text. Both indexes are flat, which means they compute exact distances against every stored entry instead of relying on an Approximate Nearest Neighbour (ANN) structure. This is important to ensure that the retrieval metrics are not affected by ANN approximation error.
import faiss

D = 384  # embedding dimension of all-MiniLM-L6-v2
index_json = faiss.IndexFlatIP(D)     # exact inner-product index for raw JSON texts
index_flatten = faiss.IndexFlatIP(D)  # exact inner-product index for flattened texts
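Filling and querying the indexes then looks roughly like this; a sketch in which the embedding variable names are assumed:

```python
# FAISS expects float32; with L2-normalized vectors, inner product = cosine similarity
index_flatten.add(flattened_embeddings.astype("float32"))
index_json.add(json_embeddings.astype("float32"))

# Retrieve the top-10 products for each embedded query
scores, ids = index_flatten.search(query_embeddings.astype("float32"), 10)
```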
To keep the dataset manageable, 5,000 random queries were selected, and all of their corresponding products were embedded and added to the indexes. The collected metrics are as follows:

Retrieval metrics of the all-MiniLM-L6-v2 embedding model on the Amazon ESCI dataset: the flattened approach consistently yields higher scores across all key retrieval metrics (Precision@10, Recall@10, and MRR). Image by author

And the performance change of the flattened version is:

The analysis confirms that embedding raw structured data into a generic vector space is a suboptimal approach, and that adding a simple preprocessing step, flattening structured data into natural language, consistently delivers a significant improvement in retrieval metrics (boosting Recall@k and Precision@k by about 20%). The main takeaway for engineers building RAG systems is that effective data preparation is critically important for reaching peak performance of a semantic retrieval/RAG pipeline.
References
[1] Full experiment code: https://colab.research.google.com/drive/1dTgt6xwmA6CeIKE38lf2cZVahaJNbQB1?usp=sharing
[2] Model: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
[3] Amazon ESCI dataset. Specific version used: https://huggingface.co/datasets/milistu/amazon-esci-data
The original dataset is available at https://www.amazon.science/code-and-datasets/shopping-queries-dataset-a-large-scale-esci-benchmark-for-improving-product-search
[4] FAISS: https://ai.meta.com/tools/faiss/
