
DSPEX NUMERICAL AND VECTOR 02 nx

Documentation for DSPEX_NUMERICAL_AND_VECTOR_02_nx from the DSPEx repository.

Yes, absolutely. Nx is exactly the solution needed to fill the numpy gap in the DSPEx architecture and to implement the performance-critical retrieval and optimization features.

In short:

  • Nx is your numpy. It provides the multi-dimensional tensors and numerical functions needed for vector math (see the short sketch after this list).
  • You can ignore EXLA for now. EXLA is a backend for Nx that makes it run incredibly fast on CPUs/GPUs. The crucial part is that you can write all your logic against the Nx API first, using its default (slower) Elixir backend, and then “turn on” EXLA later for a massive performance boost with minimal code changes.
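
For a quick feel for the parallel, here is a minimal iex sketch using only standard Nx functions (the printed results are abbreviated; the values come from a toy 2x2 tensor):

    t = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])

    Nx.mean(t)
    #=> 2.5

    Nx.dot(t, Nx.tensor([1.0, 1.0]))
    #=> [3.0, 7.0]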

This is a perfect example of a “separate the logic from the execution engine” design, and you should absolutely leverage it.


A Strategic Plan for Integrating Nx

Here is a phased approach to integrate Nx into DSPEx, addressing the gaps identified previously:

Phase 1: Foundational Integration (Logic First)

Goal: Implement the core vector retrieval logic using the Nx API, without worrying about EXLA or GPU performance yet.

  1. Add Nx as a Dependency: In your mix.exs, add Nx:

    defp deps do
      [
        {:nx, "~> 0.9"}
      ]
    end
    
  2. Create a New Retrieval Module: Add a module such as lib/dspex/retrieval/vector_search.ex to encapsulate the vector search logic. This will be the Elixir equivalent of dspy/retrievers/embeddings.py.

  3. Implement the Core Logic with Nx: Translate the numpy logic to Nx. The function names are often identical or very similar.

    numpy (Python DSPy)    | Nx (Elixir DSPEx)      | Purpose
    -----------------------|------------------------|-----------------------------------------------------------
    np.array(embeddings)   | Nx.tensor(embeddings)  | Store vectors efficiently.
    np.linalg.norm(vec)    | Nx.LinAlg.norm(vec)    | Compute norms to normalize vectors for cosine similarity.
    np.dot(query, corpus)  | Nx.dot(query, corpus)  | Calculate similarity scores (highly optimized).
    np.argsort(scores)     | Nx.argsort(scores)     | Rank score indices to select the top-k results.

    Example Implementation Snippet:

    # in lib/dspex/retrieval/vector_search.ex
    defmodule DSPEx.Retrieval.VectorSearch do
      alias Nx.LinAlg

      @doc """
      Finds the indices of the top k most similar vectors in the corpus.
      """
      def find_top_k(query_vector, corpus_tensors, k \\ 5) do
        # Ensure the query is an Nx tensor
        query_tensor = Nx.tensor(query_vector)
        # corpus_tensors should already be a tensor of shape {passage_count, embedding_dim}

        # 1. Normalize vectors for cosine similarity by dividing by their L2 norms
        query_norm = Nx.divide(query_tensor, LinAlg.norm(query_tensor))

        corpus_norm =
          Nx.divide(corpus_tensors, LinAlg.norm(corpus_tensors, axes: [1], keep_axes: true))

        # 2. Calculate dot products for similarity scores (highly optimized)
        scores = Nx.dot(query_norm, Nx.transpose(corpus_norm))

        # 3. Sort score indices in descending order and keep the first k
        top_k_indices =
          scores
          |> Nx.argsort(direction: :desc)
          |> Nx.slice_along_axis(0, k)
          |> Nx.to_flat_list()

        # 4. Return the indices of the best passages
        {:ok, top_k_indices}
      end
    end
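
    A quick usage sketch (the two-dimensional embeddings below are toy values, chosen only to make the expected result obvious):

    corpus =
      Nx.tensor([
        [0.0, 1.0],
        [1.0, 0.0],
        [0.7, 0.7]
      ])

    {:ok, indices} = DSPEx.Retrieval.VectorSearch.find_top_k([1.0, 0.0], corpus, 2)
    # indices == [1, 2]: row 1 matches the query exactly, row 2 is the next closest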
    

Success Criteria for Phase 1: A fully functional, in-memory vector search retriever that passes all unit and integration tests, using the default pure Elixir backend of Nx.


Phase 2: Performance Optimization (Speed Last)

Goal: Accelerate the now-correct retrieval logic using the EXLA backend.

  1. Add EXLA as a Dependency: In your mix.exs, add exla and configure it as the default backend in config/config.exs:

    # mix.exs
    {:exla, "~> 0.9"}
    
    # config/config.exs
    import Config
    config :nx, :default_backend, EXLA.Backend
    
  2. Use defn for JIT Compilation (Optional but Recommended): Refactor the performance-critical parts of your retrieval module into a defn block. This allows EXLA to just-in-time (JIT) compile the code to highly optimized machine code that runs on the CPU or GPU.

    Example Refactor:

    # in lib/dspex/retrieval/vector_search.ex
    defmodule DSPEx.Retrieval.VectorSearch do
      import Nx.Defn
      alias Nx.LinAlg

      defn find_top_k_fast(query_tensor, corpus_tensors, opts \\ []) do
        # Slice lengths must be compile-time constants inside defn,
        # so k is passed as a compile-time option instead of a tensor argument
        opts = keyword!(opts, k: 5)

        query_norm = query_tensor / LinAlg.norm(query_tensor)
        corpus_norm = corpus_tensors / LinAlg.norm(corpus_tensors, axes: [1], keep_axes: true)
        scores = Nx.dot(query_norm, Nx.transpose(corpus_norm))

        scores
        |> Nx.argsort(direction: :desc)
        |> Nx.slice_along_axis(0, opts[:k])
      end
    end
    

    The beauty of this is that the core logic barely changes: you wrap it in defn, pass k as a compile-time option, and EXLA handles the rest.
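
    One point worth knowing: setting the default backend accelerates eager Nx calls, while defn code is routed through a compiler. A minimal sketch of wiring defn to EXLA and calling the function (assuming the option-based signature above):

    # config/config.exs — compile all defn code with EXLA
    config :nx, :default_defn_options, compiler: EXLA

    # The call site stays ordinary Elixir; the first call JIT-compiles,
    # and subsequent calls with the same shapes reuse the compiled program.
    indices_tensor =
      DSPEx.Retrieval.VectorSearch.find_top_k_fast(query_tensor, corpus, k: 5)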

Success Criteria for Phase 2: Demonstrable, significant performance improvement in benchmarks after enabling the EXLA backend, with zero or minimal changes to the core retrieval logic written in Phase 1.
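
To quantify the win, here is a rough benchmark sketch using Erlang's built-in :timer.tc (the corpus size and embedding width are made-up; a dedicated tool such as Benchee would give more rigorous numbers). Run it once on the default backend and once with EXLA enabled to compare:

    # Random corpus: 10_000 passages with 384-dimensional embeddings
    key = Nx.Random.key(42)
    {corpus, key} = Nx.Random.uniform(key, shape: {10_000, 384})
    {query, _key} = Nx.Random.uniform(key, shape: {384})

    {micros, {:ok, _indices}} =
      :timer.tc(fn -> DSPEx.Retrieval.VectorSearch.find_top_k(query, corpus, 10) end)

    IO.puts("find_top_k took #{micros / 1_000} ms")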


Answering Your Second Question: “I assume we don't need to think about this yet?” (re: EXLA)

You are 100% correct. You do not need to think about EXLA yet.

Here’s why focusing on Nx first is the right strategy:

  1. Separation of Concerns: Nx is the API. EXLA is the execution engine. Write your code against the stable API first.
  2. Simpler Development: The default Nx backend is pure Elixir. This means you can build and test your entire feature without worrying about system dependencies like CUDA, ROCm, or C++ compilers.
  3. Correctness First, Speed Later: Ensure your vector search logic is mathematically correct and passes all tests. It’s much easier to debug in pure Elixir than on a GPU.
  4. Effortless Optimization: Once your logic is correct, enabling EXLA is primarily a configuration change that makes the same correct code dramatically faster.

Actionable Next Step

  1. Add {:nx, "~> 0.9"} to your mix.exs.
  2. Create a new lib/dspex/retrieval/ directory and a vector_search.ex file inside it.
  3. Start implementing the vector retrieval logic using the Nx functions mapped out above.
  4. Build unit tests for this new module that create simple tensors and verify that the find_top_k function returns the correct indices.
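
For instance, a first test could look like this (a minimal ExUnit sketch; the expected indices follow from the cosine similarities of the toy vectors):

    # test/dspex/retrieval/vector_search_test.exs
    defmodule DSPEx.Retrieval.VectorSearchTest do
      use ExUnit.Case, async: true

      test "find_top_k returns indices of the most similar passages" do
        corpus =
          Nx.tensor([
            [1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.9, 0.1, 0.0]
          ])

        # Row 0 matches the query exactly; row 2 is the next closest
        assert {:ok, [0, 2]} =
                 DSPEx.Retrieval.VectorSearch.find_top_k([1.0, 0.0, 0.0], corpus, 2)
      end
    end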