Examples
K-Nearest Neighbors
Use K-Nearest Neighbors (KNN) to find the K most similar items based on vector embeddings.
Setting Up with Embeddings
from cog.torque import Graph
g = Graph("products")
# Add some products with categories
g.put("laptop", "category", "electronics")
g.put("phone", "category", "electronics")
g.put("shirt", "category", "clothing")
g.put("pants", "category", "clothing")
# Add embeddings (vectors representing product features)
g.put_embedding("laptop", [0.9, 0.1, 0.2, 0.8])
g.put_embedding("phone", [0.85, 0.15, 0.25, 0.75])
g.put_embedding("shirt", [0.1, 0.9, 0.8, 0.2])
g.put_embedding("pants", [0.15, 0.85, 0.75, 0.25])Finding K-Nearest Neighbors
Finding K-Nearest Neighbors
Use k_nearest() to find the K most similar items:
# Find 2 items most similar to "laptop"
g.v().k_nearest("laptop", k=2).all()
# {'result': [{'id': 'laptop'}, {'id': 'phone'}]}
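The query returns a plain dictionary with the shape shown above, so the matched ids can be read out with ordinary Python. Note that the query vertex "laptop" is itself returned as one of its own nearest neighbors:
neighbors = g.v().k_nearest("laptop", k=2).all()
for item in neighbors['result']:
    print(item['id'])  # prints "laptop", then "phone"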
Combining with Graph Queries
Chain KNN with other graph operations:
# Find similar items and get their categories
g.v().k_nearest("laptop", k=2).out("category").all()
# {'result': [{'id': 'electronics'}]}
# Find 3 nearest neighbors and tag them
g.v().k_nearest("shirt", k=3).tag("similar").out("category").all()Use Cases
Use Cases
- Product recommendations: Find similar products (see the sketch after this list)
- Content discovery: Find related articles or media
- Clustering: Group similar items together
- Anomaly detection: Find items that are dissimilar to others
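For product recommendations, the only extra step is dropping the query item itself, since (as the earlier output shows) it comes back as its own nearest neighbor. A minimal sketch, with recommend() as a hypothetical helper name:
def recommend(graph, product, k=3):
    # Return up to k products similar to `product`, excluding the product itself
    result = graph.v().k_nearest(product, k=k + 1).all()
    return [item['id'] for item in result['result'] if item['id'] != product][:k]

print(recommend(g, "laptop", k=2))  # e.g. ['phone', ...]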