
STAGE 2 PROMPT SUMMARY

Documentation for STAGE_2_PROMPT_SUMMARY from the Dspex repository.

Stage 2: Native Implementation - Comprehensive Prompt Strategy

Overview

After drafting the detailed technical specification, it became clear that the original 12 prompts are insufficient to cover the complexity and scope of the native Elixir DSPy implementation: the specification reveals significantly more components and integration points than originally anticipated.

Recommendation: Expand to 16 comprehensive prompts to properly handle all components with adequate detail and context.

Revised Prompt Architecture

Foundation Layer (Prompts 1-4)

Core infrastructure and compilation systems

Execution Layer (Prompts 5-8)

Module systems, prediction pipelines, and reasoning patterns

Provider Layer (Prompts 9-12)

External integrations, provider clients, and communication

Production Layer (Prompts 13-16)

Performance, monitoring, deployment, and integration

16 Comprehensive Stage 2 Prompts

Prompt 1: Native Signature Compilation System

File: prompts/stage2_01_signature_compilation.md
Focus: Complete native signature compilation with deep ExDantic integration
Key Components:

  • Native Elixir signature parsing and compilation
  • ExDantic schema generation and validation integration
  • Multi-provider JSON schema generation (OpenAI, Anthropic, generic)
  • High-performance ETS-based signature caching with intelligent eviction
  • Compile-time optimizations and analysis
  • Type inference and constraint validation

Required Context:

  • Complete DSPy signatures/signature.py analysis and patterns
  • ExDantic core architecture and schema creation patterns
  • Elixir macro system and compile-time processing
  • Provider-specific schema requirements and optimizations
  • Performance optimization techniques for compilation caching
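
As a rough illustration of the caching component, the sketch below shows one way an ETS-backed signature cache with a size-bounded eviction policy could look. The module name DSPEx.Signature.Cache, the table name, and the oldest-first eviction policy are assumptions for illustration, not the repository's actual API.

```elixir
defmodule DSPEx.Signature.Cache do
  # Hypothetical sketch of an ETS-backed cache for compiled signatures with a
  # size-bounded, oldest-first eviction policy. Names and limits are illustrative.
  use GenServer

  @table :dspex_signature_cache
  @max_entries 10_000

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Returns the cached compiled signature, compiling and storing it on a miss.
  def fetch(key, compile_fun) when is_function(compile_fun, 0) do
    case :ets.lookup(@table, key) do
      [{^key, compiled, _inserted_at}] ->
        compiled

      [] ->
        compiled = compile_fun.()
        GenServer.cast(__MODULE__, {:put, key, compiled})
        compiled
    end
  end

  @impl true
  def init(_opts) do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_cast({:put, key, compiled}, state) do
    if :ets.info(@table, :size) >= @max_entries, do: evict_oldest()
    :ets.insert(@table, {key, compiled, System.monotonic_time()})
    {:noreply, state}
  end

  # Naive eviction: drop the entry with the smallest insertion timestamp.
  defp evict_oldest do
    case :ets.foldl(
           fn {key, _value, ts}, acc ->
             case acc do
               nil -> {key, ts}
               {_k, best_ts} when ts < best_ts -> {key, ts}
               _ -> acc
             end
           end,
           nil,
           @table
         ) do
      nil -> :ok
      {oldest_key, _ts} -> :ets.delete(@table, oldest_key)
    end
  end
end
```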

Prompt 2: Advanced Type System and ML-Specific Types

File: prompts/stage2_02_type_system.md
Focus: Comprehensive ML type registry with ExDantic integration
Key Components:

  • ML-specific type definitions (reasoning_chain, confidence_score, embeddings, etc.)
  • Type validation pipeline with custom validators
  • Type coercion and conversion systems
  • Provider-specific type optimizations
  • Dynamic type registration and management
  • Quality assessment and metrics integration

Required Context:

  • Complete ExDantic type system and TypeAdapter patterns
  • DSPy type requirements and field definitions
  • ML-specific validation patterns and constraints
  • Provider API requirements for different data types
  • Performance considerations for type validation at scale
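
A minimal sketch of what ML-specific type validation could look like, assuming types are addressed by atoms or tuples. The type names mirror the bullets above, but the module name and return conventions are hypothetical.

```elixir
defmodule DSPEx.Types do
  # Illustrative validators for a few ML-specific types. The {:ok, value} / {:error, reason}
  # convention and the type identifiers are assumptions, not the library's API.

  # A confidence score is a float in [0.0, 1.0]; integers are coerced to floats.
  def validate(:confidence_score, value) when is_integer(value), do: validate(:confidence_score, value / 1)
  def validate(:confidence_score, value) when is_float(value) and value >= 0.0 and value <= 1.0, do: {:ok, value}
  def validate(:confidence_score, _value), do: {:error, "confidence_score must be a float between 0.0 and 1.0"}

  # A reasoning chain is a non-empty list of string steps.
  def validate(:reasoning_chain, steps) when is_list(steps) and steps != [] do
    if Enum.all?(steps, &is_binary/1),
      do: {:ok, steps},
      else: {:error, "every reasoning step must be a string"}
  end

  def validate(:reasoning_chain, _steps), do: {:error, "reasoning_chain must be a non-empty list of strings"}

  # An embedding is a numeric vector of a fixed dimension.
  def validate({:embedding, dim}, vector) when is_list(vector) do
    cond do
      length(vector) != dim -> {:error, "expected embedding of dimension #{dim}, got #{length(vector)}"}
      not Enum.all?(vector, &is_number/1) -> {:error, "embedding values must be numeric"}
      true -> {:ok, vector}
    end
  end
end
```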

Prompt 3: Core Module System Architecture

File: prompts/stage2_03_module_system.md
Focus: GenServer-based module system with state management
Key Components:

  • Native DSPy module behavior using GenServer patterns
  • Parameter tracking and management systems
  • Module composition and dependency management
  • State persistence and recovery mechanisms
  • Module registry and lifecycle management
  • Supervision tree integration for fault tolerance

Required Context:

  • Complete DSPy primitives/module.py analysis and patterns
  • Advanced GenServer patterns for stateful ML operations
  • OTP supervision strategies for module management
  • Parameter tracking and optimization state management
  • Module composition and execution coordination
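
To make the GenServer-based design concrete, here is a minimal sketch of a stateful module process that tracks parameter updates; the struct fields and call names are assumptions.

```elixir
defmodule DSPEx.Module do
  # Hypothetical sketch of a GenServer-backed DSPy-style module that tracks its
  # learnable parameters and an update history. All names are illustrative.
  use GenServer

  defstruct params: %{}, history: []

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  def update_param(pid, key, value), do: GenServer.call(pid, {:update_param, key, value})
  def get_params(pid), do: GenServer.call(pid, :get_params)

  @impl true
  def init(opts) do
    {:ok, %__MODULE__{params: Map.new(Keyword.get(opts, :params, []))}}
  end

  @impl true
  def handle_call({:update_param, key, value}, _from, state) do
    state = %{
      state
      | params: Map.put(state.params, key, value),
        history: [{key, value, DateTime.utc_now()} | state.history]
    }

    {:reply, :ok, state}
  end

  def handle_call(:get_params, _from, state), do: {:reply, state.params, state}
end
```

Running such processes under a DynamicSupervisor (or a registry-backed supervision tree) is what would provide the fault-tolerance property mentioned above.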

Prompt 4: Program Execution Engine

File: prompts/stage2_04_program_execution.md
Focus: Advanced program orchestration and execution coordination
Key Components:

  • Program execution graph and dependency resolution
  • Parallel module execution with coordination
  • Execution context and state management
  • Error handling and recovery strategies
  • Performance monitoring and optimization
  • Resource allocation and cleanup

Required Context:

  • DSPy primitives/program.py analysis and execution patterns
  • Task coordination and parallel execution in Elixir
  • Advanced GenServer coordination patterns
  • Error propagation and recovery strategies
  • Resource management for ML workloads
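
The sketch below illustrates bounded parallel execution of independent pipeline stages with Task.async_stream/3; the representation of stages as {name, fun} tuples is an assumption.

```elixir
defmodule DSPEx.Program.Executor do
  # Illustrative sketch: run independent program stages concurrently with bounded
  # parallelism and a per-stage timeout, collecting results keyed by stage name.

  def run_parallel(stages, opts \\ []) do
    timeout = Keyword.get(opts, :timeout, 30_000)
    max_concurrency = Keyword.get(opts, :max_concurrency, System.schedulers_online())

    stages
    |> Task.async_stream(
      fn {name, fun} -> {name, fun.()} end,
      max_concurrency: max_concurrency,
      timeout: timeout,
      on_timeout: :kill_task
    )
    |> Enum.reduce(%{}, fn
      {:ok, {name, result}}, acc -> Map.put(acc, name, {:ok, result})
      {:exit, reason}, acc -> Map.update(acc, :failures, [reason], &[reason | &1])
    end)
  end
end
```

Stages with data dependencies would instead be ordered first (for example via a topological sort of the execution graph) and each independent batch dispatched this way.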

Prompt 5: Prediction Pipeline System

File: prompts/stage2_05_prediction_pipeline.md
Focus: Core prediction execution with monitoring and optimization
Key Components:

  • Native prediction engine with comprehensive execution strategies
  • Execution history tracking and analysis
  • Performance metrics collection and analysis
  • Multi-provider execution coordination
  • Adaptive execution strategy selection
  • Result validation and quality assessment

Required Context:

  • Complete DSPy predict/predict.py analysis and patterns
  • Prediction execution strategies and optimizations
  • Provider coordination and fallback mechanisms
  • Performance monitoring and telemetry integration
  • Quality assessment and result validation
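
A simplified sketch of the pipeline's control flow, assuming a signature is a map with :inputs/:outputs field lists and providers are passed as 3-arity functions. Fallback here simply tries the next provider on error; the real strategy selection would be more involved.

```elixir
defmodule DSPEx.Predict do
  # Illustrative prediction pipeline: validate inputs, try providers in order,
  # then validate outputs against the signature. All data shapes are assumptions.

  def predict(signature, inputs, provider_funs, opts \\ []) do
    with :ok <- validate_fields(signature.inputs, inputs, :missing_inputs) do
      try_providers(provider_funs, signature, inputs, opts)
    end
  end

  defp try_providers([], _signature, _inputs, _opts), do: {:error, :all_providers_failed}

  defp try_providers([provider_fun | rest], signature, inputs, opts) do
    case provider_fun.(signature, inputs, opts) do
      {:ok, outputs} ->
        case validate_fields(signature.outputs, outputs, :missing_outputs) do
          :ok -> {:ok, outputs}
          error -> error
        end

      {:error, _reason} ->
        try_providers(rest, signature, inputs, opts)
    end
  end

  defp validate_fields(fields, values, error_tag) do
    missing = for %{name: name} <- fields, not Map.has_key?(values, name), do: name
    if missing == [], do: :ok, else: {:error, {error_tag, missing}}
  end
end
```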

Prompt 6: Chain-of-Thought Implementation

File: prompts/stage2_06_chain_of_thought.md
Focus: Native CoT reasoning with step validation and quality assessment
Key Components:

  • Enhanced signature generation for reasoning steps
  • Step-by-step reasoning validation and consistency checking
  • Reasoning quality assessment and scoring
  • Intermediate result handling and validation
  • Confidence calibration and uncertainty quantification
  • Reasoning chain optimization and improvement

Required Context:

  • DSPy predict/chain_of_thought.py analysis and patterns
  • Reasoning validation techniques and quality metrics
  • Step-by-step validation and consistency checking
  • Advanced prompt engineering for reasoning tasks
  • Quality assessment frameworks for reasoning chains
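
A minimal sketch of the signature-extension and step-validation ideas, assuming the same map-based signature shape as above; the length-based quality heuristic is deliberately crude and purely illustrative.

```elixir
defmodule DSPEx.ChainOfThought do
  # Illustrative sketch: extend a signature with a `reasoning` output field and
  # validate the returned chain of steps. The signature shape and the toy quality
  # heuristic are assumptions, not the real design.

  def extend_signature(%{outputs: outputs} = signature) do
    %{signature | outputs: [%{name: :reasoning, type: :reasoning_chain} | outputs]}
  end

  def validate_reasoning(%{reasoning: steps} = result) when is_list(steps) and steps != [] do
    if Enum.all?(steps, &(is_binary(&1) and String.trim(&1) != "")),
      do: {:ok, result},
      else: {:error, :empty_reasoning_step}
  end

  def validate_reasoning(_result), do: {:error, :missing_reasoning}

  # Toy quality score: more (and longer) steps score higher, capped at 1.0.
  def score_reasoning(steps) when is_list(steps) and steps != [] do
    avg_len = steps |> Enum.map(&String.length/1) |> Enum.sum() |> Kernel./(length(steps))
    min(1.0, 0.5 * length(steps) / 5 + 0.5 * avg_len / 200)
  end
end
```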

Prompt 7: ReAct Pattern and Tool Integration

File: prompts/stage2_07_react_patterns.md
Focus: ReAct pattern implementation with tool calling and action execution
Key Components:

  • ReAct pattern execution with reasoning and action cycles
  • Tool integration and function calling capabilities
  • Action validation and execution management
  • Thought-action-observation cycle implementation
  • Error handling and recovery in multi-step workflows
  • Integration with external tools and APIs

Required Context:

  • DSPy predict/react.py analysis and implementation patterns
  • Tool calling and function execution patterns
  • Multi-step workflow orchestration
  • External API integration and error handling
  • Reasoning and action validation techniques
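
A deliberately simplified thought→action→observation loop is sketched below. The think_fun callback (which decides the next step from the accumulated trajectory) and the tool registry as a plain map are assumptions made for illustration.

```elixir
defmodule DSPEx.ReAct do
  # Illustrative ReAct loop: alternate reasoning and tool calls until the model
  # decides to finish or the step budget runs out. All names are hypothetical.

  def run(question, think_fun, tools, max_steps \\ 5) do
    loop(question, think_fun, tools, [], max_steps)
  end

  defp loop(_question, _think_fun, _tools, trajectory, 0),
    do: {:error, {:max_steps_reached, Enum.reverse(trajectory)}}

  defp loop(question, think_fun, tools, trajectory, steps_left) do
    case think_fun.(question, Enum.reverse(trajectory)) do
      {:finish, answer} ->
        {:ok, answer, Enum.reverse(trajectory)}

      {:act, tool_name, tool_input, thought} ->
        observation =
          case Map.fetch(tools, tool_name) do
            {:ok, tool_fun} -> safe_call(tool_fun, tool_input)
            :error -> {:error, {:unknown_tool, tool_name}}
          end

        step = %{thought: thought, action: tool_name, input: tool_input, observation: observation}
        loop(question, think_fun, tools, [step | trajectory], steps_left - 1)
    end
  end

  defp safe_call(fun, input) do
    {:ok, fun.(input)}
  rescue
    error -> {:error, error}
  end
end
```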

Prompt 8: Memory Management and Performance Optimization

File: prompts/stage2_08_memory_performance.md
Focus: Advanced memory management and performance optimization for ML workloads
Key Components:

  • Memory pressure detection and backpressure systems
  • Intelligent garbage collection strategies
  • ETS-based caching with memory-aware eviction
  • Large data streaming and chunked processing
  • Process pool management and scaling
  • Resource utilization monitoring and optimization

Required Context:

  • Advanced Elixir memory management patterns
  • ETS optimization strategies for large datasets
  • Streaming and chunked processing techniques
  • Process pool scaling and load balancing
  • Memory pressure detection and mitigation
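
A small sketch of the memory-pressure detection idea: a process periodically compares total VM memory against a soft limit and exposes the result so callers can shed load or evict caches. The threshold, interval, and module name are made up for the example.

```elixir
defmodule DSPEx.Memory.Monitor do
  # Illustrative sketch of memory-pressure detection; thresholds are assumptions.
  use GenServer
  require Logger

  @check_interval_ms 5_000
  @soft_limit_bytes 2 * 1024 * 1024 * 1024

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def under_pressure?, do: GenServer.call(__MODULE__, :under_pressure?)

  @impl true
  def init(_opts) do
    schedule_check()
    {:ok, %{pressure: false}}
  end

  @impl true
  def handle_call(:under_pressure?, _from, state), do: {:reply, state.pressure, state}

  @impl true
  def handle_info(:check, _state) do
    total = :erlang.memory(:total)
    pressure = total > @soft_limit_bytes
    if pressure, do: Logger.warning("memory pressure detected: #{total} bytes in use")
    schedule_check()
    {:noreply, %{pressure: pressure}}
  end

  defp schedule_check, do: Process.send_after(self(), :check, @check_interval_ms)
end
```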

Prompt 9: Provider Integration Framework

File: prompts/stage2_09_provider_framework.md
Focus: Comprehensive provider integration with native HTTP clients
Key Components:

  • Provider behavior definitions and contracts
  • Native HTTP client implementation with connection pooling
  • Provider-specific adapters (OpenAI, Anthropic, Google, etc.)
  • Request/response transformation and validation
  • Provider capability detection and optimization
  • Multi-provider coordination and fallback strategies

Required Context:

  • Complete DSPy adapters/ and clients/ analysis
  • Provider API specifications and requirements
  • HTTP client optimization and connection pooling
  • Provider-specific features and capabilities
  • Error handling and fallback strategies
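
One way to express the provider contract is an Elixir behaviour that each adapter implements, as sketched below; the callback names and types are illustrative, not the repository's actual interface.

```elixir
defmodule DSPEx.Provider do
  # Hypothetical behaviour contract that each provider adapter (OpenAI, Anthropic, ...)
  # would implement. Callback names and type shapes are assumptions.

  @type request :: %{messages: [map()], model: String.t(), opts: keyword()}
  @type response :: %{text: String.t(), usage: map(), raw: map()}

  @callback capabilities() :: %{optional(atom()) => boolean()}
  @callback build_request(signature :: map(), inputs :: map(), opts :: keyword()) :: request()
  @callback send_request(request(), opts :: keyword()) :: {:ok, response()} | {:error, term()}
  @callback parse_response(response(), signature :: map()) :: {:ok, map()} | {:error, term()}

  # Runs the full request cycle for the given provider module.
  def complete(provider, signature, inputs, opts \\ []) do
    request = provider.build_request(signature, inputs, opts)

    with {:ok, response} <- provider.send_request(request, opts) do
      provider.parse_response(response, signature)
    end
  end
end
```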

Prompt 10: Circuit Breakers and Resilience Patterns

File: prompts/stage2_10_resilience_patterns.md
Focus: Advanced fault tolerance and resilience for external provider dependencies
Key Components:

  • Circuit breaker implementation with exponential backoff
  • Intelligent retry strategies with jitter and provider-specific logic
  • Rate limiting and request throttling
  • Health monitoring and automatic recovery
  • Provider failure detection and isolation
  • Graceful degradation strategies

Required Context:

  • Advanced fault tolerance patterns in distributed systems
  • Circuit breaker and retry strategy implementations
  • Rate limiting algorithms and fairness strategies
  • Health monitoring and failure detection
  • Provider-specific reliability characteristics
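
The sketch below shows the core of a per-provider circuit breaker (closed → open → half-open). Exponential backoff, jitter, and provider-specific thresholds are omitted for brevity; the module name and constants are assumptions.

```elixir
defmodule DSPEx.Provider.CircuitBreaker do
  # Hypothetical circuit breaker: opens after repeated failures, allows a single
  # trial call after a cooldown, and closes again on success.
  use GenServer

  @failure_threshold 5
  @reset_timeout_ms 30_000

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: opts[:name] || __MODULE__)

  def call(breaker \\ __MODULE__, fun) when is_function(fun, 0) do
    case GenServer.call(breaker, :acquire) do
      :ok ->
        try do
          result = fun.()
          GenServer.cast(breaker, :success)
          {:ok, result}
        rescue
          error ->
            GenServer.cast(breaker, :failure)
            {:error, error}
        end

      :open ->
        {:error, :circuit_open}
    end
  end

  @impl true
  def init(_opts), do: {:ok, %{state: :closed, failures: 0, opened_at: nil}}

  @impl true
  def handle_call(:acquire, _from, %{state: :open, opened_at: opened_at} = s) do
    # After the cooldown, let one trial request through (half-open).
    if System.monotonic_time(:millisecond) - opened_at >= @reset_timeout_ms do
      {:reply, :ok, %{s | state: :half_open}}
    else
      {:reply, :open, s}
    end
  end

  def handle_call(:acquire, _from, s), do: {:reply, :ok, s}

  @impl true
  def handle_cast(:success, s), do: {:noreply, %{s | state: :closed, failures: 0, opened_at: nil}}

  def handle_cast(:failure, s) do
    failures = s.failures + 1

    if failures >= @failure_threshold or s.state == :half_open do
      {:noreply, %{s | state: :open, failures: failures, opened_at: System.monotonic_time(:millisecond)}}
    else
      {:noreply, %{s | failures: failures}}
    end
  end
end
```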

Prompt 11: Distributed Computing and Clustering

File: prompts/stage2_11_distributed_computing.md
Focus: Multi-node coordination and distributed ML workload management
Key Components:

  • Distributed execution coordination across cluster nodes
  • Node capability discovery and workload distribution
  • Distributed caching with consistency guarantees
  • Load balancing and failover mechanisms
  • Cluster health monitoring and automatic rebalancing
  • Network partition handling and recovery

Required Context:

  • Elixir distributed computing patterns and clustering
  • Distributed system consistency and coordination
  • Load balancing algorithms for ML workloads
  • Network partition tolerance and recovery
  • Cluster monitoring and health management
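
A small sketch of distributing work across connected nodes with :erpc. Round-robin assignment via Stream.cycle/1 stands in for real capability-aware load balancing, and failed remote calls are simply reported as errors; all names are illustrative.

```elixir
defmodule DSPEx.Cluster.Dispatcher do
  # Illustrative sketch: fan work items out across the local node and all connected
  # nodes, calling module.fun(item) remotely and tagging failures per node.

  def dispatch(work_items, module, fun, timeout \\ 30_000) do
    nodes = [Node.self() | Node.list()]

    work_items
    |> Enum.zip(Stream.cycle(nodes))
    |> Enum.map(fn {item, node} ->
      try do
        {:ok, :erpc.call(node, module, fun, [item], timeout)}
      catch
        _kind, reason -> {:error, {node, reason}}
      end
    end)
  end
end
```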

Prompt 12: Optimization and Teleprompt System

File: prompts/stage2_12_optimization_teleprompt.md
Focus: Native optimization algorithms and prompt improvement systems
Key Components:

  • Signature optimization and prompt tuning algorithms
  • Parameter optimization using Elixir-native approaches
  • Performance measurement and comparison frameworks
  • Automated prompt improvement and testing
  • Optimization history tracking and analysis
  • Multi-objective optimization strategies

Required Context:

  • Complete DSPy teleprompt/ analysis and optimization patterns
  • Optimization algorithm implementations in Elixir
  • Prompt engineering and improvement techniques
  • Performance measurement and comparison frameworks
  • Multi-objective optimization strategies
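
As the simplest possible illustration of the optimizer shape, the sketch below evaluates each candidate instruction against a dev set with a metric and keeps the best one. The program_fun/metric_fun signatures are assumptions; real teleprompt algorithms (bootstrapping, multi-objective search) are considerably more sophisticated.

```elixir
defmodule DSPEx.Teleprompt.RandomSearch do
  # Illustrative candidate search: score every instruction on the dev set and
  # return the {instruction, average_score} pair with the highest average.
  # Assumes a non-empty devset and candidate list.

  def optimize(candidate_instructions, devset, program_fun, metric_fun) do
    candidate_instructions
    |> Enum.map(fn instruction ->
      scores =
        Enum.map(devset, fn example ->
          prediction = program_fun.(instruction, example)
          metric_fun.(example, prediction)
        end)

      {instruction, Enum.sum(scores) / length(scores)}
    end)
    |> Enum.max_by(fn {_instruction, avg_score} -> avg_score end)
  end
end
```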

Prompt 13: Evaluation and Metrics Framework

File: prompts/stage2_13_evaluation_metrics.md
Focus: Comprehensive evaluation system with ML-specific metrics
Key Components:

  • Native evaluation engine with metric calculation
  • ML-specific metrics and quality assessment
  • Benchmark execution and comparison frameworks
  • Automated evaluation pipelines
  • Result analysis and visualization
  • Performance regression detection

Required Context:

  • Complete DSPy evaluate/ analysis and evaluation patterns
  • ML evaluation metrics and assessment techniques
  • Statistical analysis and comparison methods
  • Automated testing and evaluation pipelines
  • Performance monitoring and regression detection
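
A compact sketch of a concurrent evaluation run: each dev example is scored in parallel and the results are aggregated into a mean plus per-example scores. The exact-match metric and the result shape are illustrative assumptions.

```elixir
defmodule DSPEx.Evaluate do
  # Illustrative evaluation engine: score examples concurrently and aggregate.

  def run(devset, program_fun, metric_fun, opts \\ []) do
    max_concurrency = Keyword.get(opts, :max_concurrency, System.schedulers_online())

    scores =
      devset
      |> Task.async_stream(
        fn example -> metric_fun.(example, program_fun.(example)) end,
        max_concurrency: max_concurrency,
        timeout: Keyword.get(opts, :timeout, 60_000),
        on_timeout: :kill_task
      )
      |> Enum.map(fn
        {:ok, score} -> score
        {:exit, _reason} -> 0.0
      end)

    %{mean: Enum.sum(scores) / max(length(scores), 1), scores: scores, n: length(scores)}
  end

  # Example metric: exact match on an `answer` field.
  def exact_match(%{answer: expected}, %{answer: got}), do: if(expected == got, do: 1.0, else: 0.0)
  def exact_match(_example, _prediction), do: 0.0
end
```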

Prompt 14: Streaming and Async Operations

File: prompts/stage2_14_streaming_async.md
Focus: Real-time streaming and asynchronous ML operations
Key Components:

  • Streaming response handling and processing
  • Asynchronous execution patterns with GenStage/Flow
  • Real-time data processing pipelines
  • Backpressure management for streaming workloads
  • Live result aggregation and transformation
  • WebSocket and Server-Sent Events integration

Required Context:

  • DSPy streaming/ analysis and streaming patterns
  • GenStage and Flow patterns for data processing
  • Streaming data handling and backpressure management
  • Real-time processing and aggregation techniques
  • WebSocket and SSE integration patterns
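
To illustrate demand-driven backpressure, here is a minimal GenStage producer that buffers streamed LLM tokens and only emits as many as downstream consumers have asked for. It assumes the gen_stage dependency; the module and function names are hypothetical.

```elixir
defmodule DSPEx.Streaming.TokenProducer do
  # Illustrative GenStage producer: tokens pushed in via notify/2 are queued and
  # dispatched only up to the accumulated downstream demand (backpressure).
  use GenStage

  def start_link(opts \\ []), do: GenStage.start_link(__MODULE__, :ok, opts)
  def notify(pid, tokens), do: GenStage.cast(pid, {:tokens, tokens})

  @impl true
  def init(:ok), do: {:producer, {:queue.new(), 0}}

  @impl true
  def handle_cast({:tokens, tokens}, {queue, demand}) do
    queue = Enum.reduce(tokens, queue, &:queue.in/2)
    dispatch(queue, demand, [])
  end

  @impl true
  def handle_demand(incoming, {queue, demand}) do
    dispatch(queue, demand + incoming, [])
  end

  defp dispatch(queue, 0, events), do: {:noreply, Enum.reverse(events), {queue, 0}}

  defp dispatch(queue, demand, events) do
    case :queue.out(queue) do
      {{:value, token}, queue} -> dispatch(queue, demand - 1, [token | events])
      {:empty, queue} -> {:noreply, Enum.reverse(events), {queue, demand}}
    end
  end
end
```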

Prompt 15: Telemetry and Production Monitoring

File: prompts/stage2_15_telemetry_monitoring.md
Focus: Comprehensive telemetry system with ML-specific metrics and alerting
Key Components:

  • ML-specific telemetry events and metrics collection
  • Performance monitoring and anomaly detection
  • Alert rules and notification systems
  • Metrics aggregation and export to external systems
  • Dashboard integration and visualization
  • Cost tracking and resource utilization monitoring

Required Context:

  • Advanced Elixir telemetry patterns and monitoring
  • ML-specific metrics and KPIs
  • Alert systems and notification strategies
  • Metrics export and integration with monitoring systems
  • Cost tracking and resource optimization
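
A small sketch of the telemetry layer using the standard :telemetry library: predictions are wrapped in a span and a logger-backed handler reacts to the stop event. The [:dspex, :predict] event names and metadata keys are assumptions for illustration.

```elixir
defmodule DSPEx.Telemetry do
  # Illustrative telemetry wiring; event names and metadata keys are assumptions.
  require Logger

  # Emits [:dspex, :predict, :start] and [:dspex, :predict, :stop] around the call.
  def instrument_prediction(metadata, fun) when is_function(fun, 0) do
    :telemetry.span([:dspex, :predict], metadata, fn ->
      result = fun.()
      {result, Map.put(metadata, :status, :ok)}
    end)
  end

  def attach_default_handlers do
    :telemetry.attach(
      "dspex-predict-logger",
      [:dspex, :predict, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([:dspex, :predict, :stop], measurements, metadata, _config) do
    duration_ms = System.convert_time_unit(measurements.duration, :native, :millisecond)
    Logger.info("prediction finished in #{duration_ms}ms", signature: metadata[:signature])
  end
end
```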

Prompt 16: Complete Ash Framework Integration and Stage 2 Validation

File: prompts/stage2_16_ash_integration_validation.md
Focus: Deep Ash integration and comprehensive Stage 2 completion validation
Key Components:

  • Advanced Ash resource patterns for ML operations
  • Domain modeling and relationship management
  • Query engine integration and optimization
  • Action composition and workflow orchestration
  • Resource lifecycle management
  • Complete Stage 2 integration testing and validation
  • Performance benchmarking and production readiness assessment

Required Context:

  • Complete Ash framework advanced patterns and capabilities
  • Domain modeling for ML workflows and operations
  • Resource relationship management and optimization
  • Integration testing and validation strategies
  • Performance benchmarking and production readiness criteria
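
As a rough sketch of the resource-modeling side, the example below defines an Ash resource for persisted prediction runs. It assumes Ash 3.x syntax, an Ash.Domain named MyApp.ML defined elsewhere, and the ETS data layer for brevity; the attribute and action names are illustrative, not the project's actual domain model.

```elixir
defmodule MyApp.ML.Prediction do
  # Hypothetical Ash resource for prediction runs; a real deployment would likely
  # use a persistent data layer such as AshPostgres instead of ETS.
  use Ash.Resource,
    domain: MyApp.ML,
    data_layer: Ash.DataLayer.Ets

  attributes do
    uuid_primary_key :id
    attribute :signature_name, :string, allow_nil?: false
    attribute :inputs, :map, default: %{}
    attribute :outputs, :map, default: %{}
    attribute :confidence, :float
    create_timestamp :inserted_at
    update_timestamp :updated_at
  end

  actions do
    defaults [:read, :destroy]

    create :create do
      accept [:signature_name, :inputs, :outputs, :confidence]
    end
  end
end
```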

Prompt Design Principles

1. Complete Self-Containment

Each prompt contains ALL necessary context:

  • Relevant DSPy source code analysis copied in full
  • Complete ExDantic integration patterns and examples
  • Advanced Elixir/OTP implementation guidance
  • Provider-specific requirements and optimizations
  • Comprehensive testing and validation approaches

2. Deep Technical Integration

Each prompt provides:

  • Detailed implementation strategies combining all research
  • Complete code examples with error handling
  • Performance optimization techniques
  • Production readiness considerations
  • Integration with other Stage 2 components

3. Production Quality Focus

All implementations must include:

  • Comprehensive error handling and recovery
  • Performance monitoring and optimization
  • Security best practices and validation
  • Scalability and resource management
  • Operational monitoring and alerting

4. Incremental Complexity

Prompts build systematically:

  • Foundation layer establishes core infrastructure
  • Execution layer builds on foundation with complex workflows
  • Provider layer adds external integrations and resilience
  • Production layer completes with monitoring and deployment

Implementation Strategy

Phase 1: Foundation (Prompts 1-4)

  • Native signature compilation and type systems
  • Core module architecture and execution engine
  • Duration: 3-4 weeks
  • Success Criteria: Basic DSPy operations working natively

Phase 2: Execution (Prompts 5-8)

  • Prediction pipelines and reasoning patterns
  • Memory management and performance optimization
  • Duration: 3-4 weeks
  • Success Criteria: Complex ML workflows executing efficiently

Phase 3: Integration (Prompts 9-12)

  • Provider integrations and distributed computing
  • Optimization and resilience patterns
  • Duration: 3-4 weeks
  • Success Criteria: Production-ready external integrations

Phase 4: Production (Prompts 13-16)

  • Evaluation, monitoring, and Ash integration
  • Complete system validation and deployment
  • Duration: 3-4 weeks
  • Success Criteria: Full production deployment ready

Context Sources for Each Prompt

Research Foundation:

  • DSPy Analysis: Complete source code analysis from comprehensive research
  • ExDantic Integration: Deep integration patterns and advanced features
  • Elixir/OTP Patterns: Advanced patterns for ML workloads and high performance
  • Provider APIs: Complete specifications and optimization strategies

Implementation Guidance:

  • Performance Optimization: Memory management, caching, and scaling strategies
  • Error Handling: Comprehensive error recovery and fault tolerance
  • Monitoring: Telemetry, metrics, and operational observability
  • Testing: Validation, benchmarking, and quality assurance

This expanded 16-prompt strategy ensures comprehensive coverage of all components identified in the technical specification while maintaining manageable scope for each individual prompt.