AI & Machine Learning

Building AI-Powered Applications with Next.js and Vector Search

Learn how to integrate vector search and AI into your Next.js applications for intelligent, context-aware features. A practical guide with real-world examples.

Ammly
Software Engineer @ Safaricom
October 24, 2025
4 min read
Next.js · AI · Vector Search · Python · PostgreSQL

Vector search is revolutionizing how we build intelligent applications. In this guide, I'll show you how to integrate vector search with Next.js to create AI-powered features that understand context and meaning.

What is Vector Search?

Vector search, also known as semantic search, goes beyond simple keyword matching. It understands the meaning behind queries and returns relevant results even when exact keywords don't match.

# Example: Converting text to vectors
from openai import OpenAI

client = OpenAI()

def get_embedding(text: str) -> list[float]:
    """Convert text to vector embedding"""
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding
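
Behind the scenes, two embeddings are compared by measuring the angle between them, usually with cosine similarity. Here's a minimal TypeScript sketch with no dependencies:

// Cosine similarity: ~1 means very similar, ~0 means unrelated
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}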

Why Use Vector Search?

Traditional keyword search has limitations:

  • Can't understand synonyms or related concepts
  • Struggles with typos and variations
  • Doesn't capture semantic meaning

Vector search solves these problems by:

  1. Understanding context - Knows "ML" and "Machine Learning" are the same (see the example after this list)
  2. Finding similar content - Returns relevant results without exact matches
  3. Supporting multiple languages - Works across language barriers
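
To see the first point in action, embed both phrases and compare them with the cosineSimilarity helper above. A quick sketch, assuming the OpenAI Node SDK and an OPENAI_API_KEY in your environment:

import OpenAI from "openai";

const openai = new OpenAI();

async function compare(a: string, b: string) {
  // Embed both phrases in a single request
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: [a, b],
  });

  // Related phrases score noticeably higher than unrelated ones
  console.log(cosineSimilarity(data[0].embedding, data[1].embedding));
}

await compare("ML", "Machine Learning");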

Real-World Example: Threat Intelligence Platform

At Safaricom, I built KingaSphere CTI using vector search for threat correlation:

// Next.js API route for semantic search
export async function POST(request: Request) {
  const { query } = await request.json();

  // Get query embedding
  const embedding = await getEmbedding(query);

  // Search using pgvector; <-> is the L2 distance operator, and the
  // embedding array is serialized into a vector literal for the cast
  const threats = await db.execute(
    sql`SELECT * FROM threats
        ORDER BY embedding <-> ${JSON.stringify(embedding)}::vector
        LIMIT 10`
  );

  return Response.json({ threats });
}

Key Benefits

Vector search reduced our threat detection time from hours to minutes through automated semantic correlation.

The system now processes 1M+ events daily with 95% accuracy in threat classification.

Implementation Steps

1. Set Up PostgreSQL with pgvector

-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create table with vector column
-- (1536 dimensions matches text-embedding-3-small)
CREATE TABLE documents (
  id SERIAL PRIMARY KEY,
  content TEXT,
  embedding VECTOR(1536)
);

-- Create index for fast approximate similarity search
-- (build IVFFlat indexes after loading data for better recall)
CREATE INDEX ON documents
USING ivfflat (embedding vector_cosine_ops);
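
The routes in the next steps import a db client from @/lib/db, which this post doesn't show. Here's a minimal version using node-postgres (an assumption on my part; any Postgres client works):

// lib/db.ts - minimal Postgres client
// (assumes the pg package is installed and DATABASE_URL is set)
import { Pool } from "pg";

export const db = new Pool({
  connectionString: process.env.DATABASE_URL,
});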

2. Generate Embeddings

Use OpenAI's embedding API:

import OpenAI from "openai";

const openai = new OpenAI();

async function generateEmbedding(text: string) {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });

  return response.data[0].embedding;
}
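
With the table and the embedding helper in place, indexing a document is a single INSERT, with the embedding serialized as a vector literal. A sketch using the db client from step 1:

// Embed a document and store it alongside its vector
async function indexDocument(content: string) {
  const embedding = await generateEmbedding(content);

  // pgvector accepts a JSON-style array string cast to ::vector
  await db.query(
    "INSERT INTO documents (content, embedding) VALUES ($1, $2::vector)",
    [content, JSON.stringify(embedding)]
  );
}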

3. Build the Search API

// app/api/search/route.ts
import { db } from "@/lib/db";
import { generateEmbedding } from "@/lib/embeddings"; // wherever the helper from step 2 lives

export async function POST(req: Request) {
  const { query } = await req.json();

  const queryEmbedding = await generateEmbedding(query);

  // <=> is pgvector's cosine distance, so 1 - distance = similarity
  const results = await db.query(
    `SELECT id, content,
     1 - (embedding <=> $1::vector) AS similarity
     FROM documents
     ORDER BY embedding <=> $1::vector
     LIMIT 10`,
    [JSON.stringify(queryEmbedding)]
  );

  return Response.json(results.rows);
}
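
Calling the route from the client is then a plain fetch:

// Query the search API from anywhere in your app
async function searchDocuments(query: string) {
  const res = await fetch("/api/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}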

Performance Optimization

Caching Strategy

import { Redis } from "@upstash/redis";
import { createHash } from "node:crypto";

const redis = Redis.fromEnv();

async function cachedEmbedding(text: string) {
  // Hash the text so long inputs make short, safe cache keys
  const key = `embedding:${createHash("sha256").update(text).digest("hex")}`;

  const cached = await redis.get<number[]>(key);
  if (cached) return cached;

  const embedding = await generateEmbedding(text);
  await redis.set(key, embedding, { ex: 3600 }); // cache for 1 hour

  return embedding;
}

Batch Processing

For better performance, process embeddings in batches:

# Embed a whole batch of texts in a single API call
response = client.embeddings.create(
    model="text-embedding-3-small",
    input=texts_batch  # list of texts, e.g. 100 at a time
)
embeddings = [item.embedding for item in response.data]

Best Practices

  1. Choose the right model - text-embedding-3-small for speed, text-embedding-3-large for accuracy
  2. Normalize vectors - Index unit-length vectors so cosine similarity stays consistent (see the sketch after this list)
  3. Cache embeddings - Reduce API costs by caching frequently used embeddings
  4. Index your vectors - Use IVFFlat or HNSW for fast similarity search
  5. Monitor costs - Track API usage and optimize batch sizes
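
On point 2: OpenAI embeddings come back unit-length, so cosine similarity and dot product already agree. If you mix in embeddings from a model that doesn't normalize, a tiny helper keeps rankings consistent:

// L2-normalize a vector so dot product equals cosine similarity
function normalize(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return norm === 0 ? v : v.map((x) => x / norm);
}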

Conclusion

Vector search unlocks powerful AI capabilities in your applications. Combined with Next.js, you can build intelligent features that understand user intent and deliver relevant results.

The key is to:

  • Start small with a focused use case
  • Optimize for performance with caching
  • Monitor costs and usage
  • Iterate based on user feedback

Want to learn more? Check out my GitHub for example implementations and code samples.