TESTS

Documentation for TESTS from the Foundation repository.

While the existing test suite has a solid foundation covering unit, integration, and property-based tests, there are several key areas where its robustness can be significantly improved. The current tests focus primarily on “happy path” integrations and basic invariants.

Here is a proposal for a series of new tests, organized by category, designed to cover gaps in contract adherence, long-term stability, resource management, failure recovery (chaos testing), and performance.


1. Contract Tests (test/contract/)

Gap: The test/contract directory is empty. The library defines several behaviours (Configurable, EventStore, Telemetry) but lacks tests to enforce them. Contract tests ensure that any implementation of these behaviours (including the ones provided) correctly adheres to the defined API, including typespecs and return values.

File: test/contract/configurable_contract_test.exs

Purpose: To verify that Foundation.Services.ConfigServer (and any other future config provider) correctly implements the Foundation.Contracts.Configurable behaviour.

defmodule Foundation.Contract.ConfigurableContractTest do
  use ExUnit.Case, async: false

  # This test runs against any module that implements the Configurable behaviour.
  # We will test the default implementation: Foundation.Services.ConfigServer
  @module Foundation.Services.ConfigServer

  # Setup: Ensure the service is running
  setup do
    Foundation.TestHelpers.ensure_config_available()
    :ok
  end

  describe "Configurable Behaviour Contract for #{@module}" do
    test "get/0 returns a valid Config struct or an error" do
      # Assert the shape and type of the return value
    end

    test "get/1 returns a value for a valid path and an error for an invalid one" do
      # Test with both valid and deeply nested invalid paths
    end

    test "update/2 succeeds for an updatable path and fails for a forbidden one" do
      # Use `updatable_paths/0` to dynamically pick paths for testing
    end
    
    test "validate/1 correctly validates or rejects a Config struct" do
      # Test with both a default config and a known-invalid one
    end

    test "updatable_paths/0 returns a list of lists of atoms" do
      # Assert the structure and type of the returned paths
    end

    test "reset/0 restores the configuration to a default state" do
      # Update a value, then reset, then verify it's back to default
    end

    test "available?/0 returns a boolean" do
      # Assert the return type is always a boolean
    end
  end
end

(Similar contract tests would be created for EventStore and Telemetry contracts.)
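Since the same assertions should hold for every implementation of a given behaviour, one option is to extract them into a shared `use`-able module and repeat the pattern for the EventStore and Telemetry contracts. Below is a minimal sketch under that assumption; Foundation.ConfigurableContractCase is a hypothetical helper, not an existing module in the repository, and the two sample tests mirror the skeleton above.

# Hypothetical shared contract case for the Configurable behaviour.
defmodule Foundation.ConfigurableContractCase do
  defmacro __using__(opts) do
    quote bind_quoted: [impl: opts[:implementation]] do
      use ExUnit.Case, async: false

      @impl_module impl

      test "available?/0 returns a boolean" do
        assert is_boolean(@impl_module.available?())
      end

      test "updatable_paths/0 returns a list of lists of atoms" do
        paths = @impl_module.updatable_paths()
        assert is_list(paths)
        assert Enum.all?(paths, fn path -> is_list(path) and Enum.all?(path, &is_atom/1) end)
      end
    end
  end
end

# Each implementation then needs only a one-line test module:
defmodule Foundation.Contract.ConfigServerContractTest do
  use Foundation.ConfigurableContractCase, implementation: Foundation.Services.ConfigServer
end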


2. Smoke Tests (test/smoke/)

Gap: The test/smoke directory is empty. Smoke tests provide a quick, high-level sanity check to ensure the entire system can start and that its most critical functions are operational. This is invaluable for CI/CD pipelines.

File: test/smoke/system_smoke_test.exs

Purpose: To perform a fast, basic check of the entire Foundation system’s health and core functionality after application startup.

defmodule Foundation.Smoke.SystemSmokeTest do
  use ExUnit.Case, async: false
  @moduletag :smoke

  describe "Foundation System Smoke Test" do
    test "application starts and all services report healthy" do
      # This test relies on the default ExUnit setup starting the app.
      # 1. Check overall health
      assert {:ok, health} = Foundation.health()
      assert health.status == :healthy
      assert Foundation.available?()
    end

    test "core APIs are responsive and return correct shapes" do
      # 1. Config: Can we read a value?
      assert {:ok, _} = Foundation.Config.get([:dev, :debug_mode])

      # 2. Events: Can we store and retrieve an event?
      {:ok, event} = Foundation.Events.new_event(:smoke_test, %{time: System.os_time()})
      {:ok, id} = Foundation.Events.store(event)
      assert {:ok, ^event} = Foundation.Events.get(id)

      # 3. Telemetry: Can we emit and get metrics?
      :ok = Foundation.Telemetry.emit_counter([:smoke_test, :run], %{})
      assert {:ok, metrics} = Foundation.Telemetry.get_metrics()
      assert is_map(metrics)

      # 4. Service Registry: Can we look up a core service?
      assert {:ok, _pid} = Foundation.ServiceRegistry.lookup(:production, :config_server)
    end
  end
end
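Since the new suites are all tag-driven (:smoke, :stress, :benchmark), the usual ExUnit pattern is to exclude those tags by default and opt in from the command line. A possible test_helper.exs setup, assuming nothing there already conflicts with it:

# test/test_helper.exs
# Exclude slow or noisy suites by default; run them on demand, e.g.:
#   mix test --only smoke
#   mix test --only stress
#   mix test --only benchmark
ExUnit.start(exclude: [:smoke, :stress, :benchmark])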

3. Stress & Resilience Tests (test/stress/)

Gap: While concurrency_validation_test.exs exists, it doesn’t test for long-term stability, resource leaks, or system behavior under sustained, chaotic conditions.

File: test/stress/sustained_load_stress_test.exs

Purpose: To bombard the system with a high volume of concurrent operations over a longer duration (e.g., 30-60 seconds) to check for memory leaks, process message queue overloads, and performance degradation.

defmodule Foundation.Stress.SustainedLoadTest do
  use ExUnit.Case, async: false
  @moduletag :stress

  test "system remains stable and responsive under sustained concurrent load" do
    # 1. Start a high number of concurrent tasks (e.g., 100).
    # 2. Each task runs in a loop for 30 seconds, continuously and randomly:
    #    - Reading config values.
    #    - Writing new events.
    #    - Emitting telemetry counters and gauges.
    #    - Querying the EventStore.
    # 3. During the test, periodically check the memory usage of core service PIDs
    #    and their message queue lengths (`Process.info(pid, [:memory, :message_queue_len])`).
    # 4. Assert that memory usage and queue lengths do not grow unbounded and stay
    #    within a reasonable threshold.
    # 5. After the test, assert that the system is still healthy and responsive.
  end
end
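The step comments above translate fairly directly into Task-based workers plus a sampling loop. A possible filled-in version of this skeleton is sketched below; the worker count, duration, and the memory and queue-length thresholds are illustrative assumptions, and the :production/:config_server registry key mirrors the smoke test above.

# Sketch only: worker count, duration, and thresholds are illustrative assumptions.
defmodule Foundation.Stress.SustainedLoadTest do
  use ExUnit.Case, async: false
  @moduletag :stress

  @duration_ms 30_000
  @workers 100

  # Each worker loops until the shared deadline, randomly exercising the core APIs.
  defp run_load(deadline) do
    if System.monotonic_time(:millisecond) < deadline do
      case Enum.random([:config, :event, :telemetry]) do
        :config ->
          Foundation.Config.get([:dev, :debug_mode])

        :event ->
          {:ok, event} = Foundation.Events.new_event(:load_test, %{at: System.os_time()})
          Foundation.Events.store(event)

        :telemetry ->
          Foundation.Telemetry.emit_counter([:load_test, :ops], %{})
      end

      run_load(deadline)
    else
      :ok
    end
  end

  test "system remains stable and responsive under sustained concurrent load" do
    deadline = System.monotonic_time(:millisecond) + @duration_ms
    tasks = for _ <- 1..@workers, do: Task.async(fn -> run_load(deadline) end)

    # While the load runs, periodically sample memory and mailbox size of a core service.
    for _ <- 1..10 do
      Process.sleep(div(@duration_ms, 10))
      {:ok, pid} = Foundation.ServiceRegistry.lookup(:production, :config_server)
      info = Process.info(pid, [:memory, :message_queue_len])
      assert info[:message_queue_len] < 10_000
      assert info[:memory] < 100_000_000
    end

    Enum.each(tasks, &Task.await(&1, @duration_ms * 2))
    assert Foundation.available?()
  end
end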

File: test/stress/chaos_resilience_test.exs

Purpose: To simulate “chaos engineering” by randomly killing core services while the system is under load and verifying that the supervision tree correctly restarts them, leading to eventual system recovery.

defmodule Foundation.Stress.ChaosResilienceTest do
  use ExUnit.Case, async: false
  @moduletag :stress

  test "system recovers gracefully after random service failures under load" do
    # 1. Start a sustained load (similar to the test above).
    # 2. In a separate process (the "Chaos Monkey"), loop for 20 seconds:
    #    - Every 2-3 seconds, randomly select a core service PID 
    #      (ConfigServer, EventStore, TelemetryService).
    #    - Kill the selected process using `Process.exit(pid, :kill)`.
    # 3. During this time, the load-generating tasks will experience errors. This is expected.
    #    The test should assert that these errors are handled gracefully and do not crash the tasks.
    # 4. After the chaos period, stop the load generators.
    # 5. Wait for a "cool down" period (e.g., 5 seconds) for supervisors to stabilize the system.
    # 6. Assert that `Foundation.health()` returns `:healthy`.
    # 7. Assert that all core API functions are responsive again.
  end
end
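A possible filled-in version of this skeleton is sketched below. The registry keys for the three services, the kill cadence, and the cool-down length are assumptions chosen to match the step comments above.

# Sketch only: service registry keys, kill cadence, and cool-down are illustrative assumptions.
defmodule Foundation.Stress.ChaosResilienceTest do
  use ExUnit.Case, async: false
  @moduletag :stress

  @services [:config_server, :event_store, :telemetry_service]

  # The "Chaos Monkey": kill a randomly chosen core service every 2-3 seconds.
  defp chaos_monkey(0), do: :ok

  defp chaos_monkey(kills_left) do
    Process.sleep(Enum.random(2_000..3_000))

    case Foundation.ServiceRegistry.lookup(:production, Enum.random(@services)) do
      {:ok, pid} -> Process.exit(pid, :kill)
      _ -> :ok
    end

    chaos_monkey(kills_left - 1)
  end

  # Load generator that tolerates the errors caused by restarting services.
  defp generate_load(deadline) do
    if System.monotonic_time(:millisecond) < deadline do
      try do
        Foundation.Config.get([:dev, :debug_mode])
        Foundation.Telemetry.emit_counter([:chaos_test, :ops], %{})
      rescue
        _ -> :ok
      catch
        :exit, _ -> :ok
      end

      generate_load(deadline)
    end
  end

  test "system recovers gracefully after random service failures under load" do
    deadline = System.monotonic_time(:millisecond) + 20_000
    workers = for _ <- 1..20, do: Task.async(fn -> generate_load(deadline) end)

    # Roughly eight kills spread across the 20-second load window.
    chaos_monkey(8)

    Enum.each(workers, &Task.await(&1, 30_000))
    Process.sleep(5_000)

    assert {:ok, %{status: :healthy}} = Foundation.health()
    assert {:ok, _} = Foundation.Config.get([:dev, :debug_mode])
  end
end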

4. Benchmark Tests (test/benchmark/)

Gap: There are no performance benchmarks to quantify the latency of critical operations or to detect performance regressions over time.

File: test/benchmark/core_operations_benchmark.exs

Purpose: To establish performance baselines for the most frequent and critical operations using the Benchee library.

defmodule Foundation.Benchmark.CoreOperations do
  use ExUnit.Case, async: false
  @moduletag :benchmark

  # Setup: ensure all services are running and primed before measuring.

  # Benchee suite to measure critical path functions. Running it inside a
  # tagged test (rather than at module compile time) means it only executes
  # when the :benchmark tag is explicitly included in the run.
  test "benchmark core operations" do
    Benchee.run(
      %{
        "Config.get/1" => fn -> Foundation.Config.get([:dev, :debug_mode]) end,
        "EventStore.store/1" => fn ->
          {:ok, event} = Foundation.Events.new_event(:bench, %{})
          Foundation.Events.store(event)
        end,
        "ServiceRegistry.lookup/2" => fn ->
          Foundation.ServiceRegistry.lookup(:production, :config_server)
        end,
        "Telemetry.emit_counter/2" => fn ->
          Foundation.Telemetry.emit_counter([:bench, :count], %{})
        end,
        "Infrastructure.execute_protected (no-op)" => fn ->
          Foundation.Infrastructure.execute_protected(:noop, [], fn -> :ok end)
        end
      },
      time: 5,        # seconds of runtime measurement per job
      memory_time: 2  # seconds of memory measurement per job
    )
  end
end
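One refinement worth considering, as a hedged sketch: in the EventStore.store/1 job above, event creation is measured together with the store call. Benchee's per-job before_each hook can move that setup out of the measured function; the value returned by the hook becomes the input passed to the job.

# Sketch: exclude event creation from the measured path with a per-job before_each hook.
Benchee.run(
  %{
    "EventStore.store/1" =>
      {fn event -> Foundation.Events.store(event) end,
       before_each: fn _input ->
         {:ok, event} = Foundation.Events.new_event(:bench, %{})
         event
       end}
  },
  time: 5,
  memory_time: 2
)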

5. Enhanced Integration Tests (test/integration/)

Gap: The current integration tests can be expanded to cover more complex failure and degradation scenarios.

File: test/integration/graceful_degradation_integration_test.exs

Purpose: To test the system-wide behavior when a single service is unavailable for an extended period, focusing on how other services degrade gracefully.

defmodule Foundation.Integration.GracefulDegradationIntegrationTest do
  use ExUnit.Case, async: false

  describe "when ConfigServer is down" do
    test "Events and Telemetry services continue to operate" do
      # 1. Stop ConfigServer.
      # 2. Assert that `Events.store/1` still returns :ok (or a non-crashing error) 
      #    because it should not have a hard dependency.
      # 3. Assert `Telemetry.emit_counter/2` still returns :ok.
      # 4. Restart ConfigServer and verify full functionality is restored.
    end
  end

  describe "when EventStore is down" do
    test "Config updates still succeed and Telemetry is emitted" do
      # 1. Stop EventStore.
      # 2. Assert `Config.update/2` on an updatable path still returns :ok,
      #    even though the corresponding audit event cannot be stored.
      # 3. Assert `Telemetry.get_metrics/0` is still responsive.
      # 4. Restart EventStore and verify it can now store events.
    end
  end
end
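A minimal sketch of the first scenario's body, replacing the comment-only test in the "when ConfigServer is down" block above: Foundation.Supervisor and the :config_server child id are hypothetical names used purely for illustration, and a supervisor-level terminate is assumed because a plain kill would be restarted immediately by the supervision tree.

test "Events and Telemetry services continue to operate" do
  # Terminate ConfigServer via its supervisor so it stays down for the test.
  # NOTE: Foundation.Supervisor and the :config_server child id are hypothetical.
  :ok = Supervisor.terminate_child(Foundation.Supervisor, :config_server)

  # Events and Telemetry should not have a hard dependency on config.
  {:ok, event} = Foundation.Events.new_event(:degradation_test, %{})
  assert {:ok, _id} = Foundation.Events.store(event)
  assert :ok = Foundation.Telemetry.emit_counter([:degradation_test, :ops], %{})

  # Bring ConfigServer back and verify full functionality is restored.
  {:ok, _pid} = Supervisor.restart_child(Foundation.Supervisor, :config_server)
  assert Foundation.available?()
end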