Simulation Foundations

This page captures the authoritative structure of the distributed ByteBiota runtime. Update it alongside the code so newcomers understand how the server, workers, and shared simulation primitives fit together.

Core Components {#core-components}

  • DistributedServer – Central coordinator that manages worker registration, assignments, checkpointing, analytics, and shared environment state (src/bytebiota/server/server.py).
  • WorkerManager / StateAggregator / CheckpointService – Server subsystems that track worker health, merge execution results, and orchestrate distributed checkpoints (src/bytebiota/server/worker_manager.py, state_aggregator.py, checkpoint_service.py).
  • DistributedWorker – Long-running worker loop that registers with the server, pulls assignments, and reports execution telemetry (src/bytebiota/worker/worker.py).
  • LocalExecutor – Worker-side execution engine that wraps soup memory, scheduler, reaper, mutation engine, taxonomy, and environment access (src/bytebiota/worker/executor.py).
  • Shared primitives – SoupMemory, Scheduler, Reaper, MutationEngine, Environment, Organism, and the ISA module remain the canonical implementations used by both server orchestration and worker execution (src/bytebiota/memory.py, scheduler.py, reaper.py, mutation.py, environment.py, organism.py, isa.py).
  • Configuration – Config dataclasses capture tunable parameters for soup, scheduler, mutation, reaper, energy, and environment (src/bytebiota/config.py); ServerConfig and WorkerConfig add network and operational knobs on top.

Lifecycle Overview {#lifecycle-overview}

  1. Server bootstrap – create_server() instantiates DistributedServer, wiring run logging, environment construction, WebSocket routing, analytics, and tuning hooks while loading authoritative simulation parameters from ServerConfig (src/bytebiota/server/__init__.py, server.py).
  2. Worker registration – DistributedWorker loads or receives Config, constructs a LocalExecutor, restores cached state if available, and registers with the server via ServerSyncClient (src/bytebiota/worker/worker.py).
  3. Assignment cycle – WorkerManager issues batches of organism IDs and runtime overrides. Each worker calls LocalExecutor.execute_assignment, which runs organism slices and returns births, deaths, seeds, and local statistics (src/bytebiota/worker/executor.py).
  4. Aggregation & control – StateAggregator merges worker results into the global view, drives seed bank updates, and triggers stagnation reseeding. CheckpointService coordinates server and worker checkpoints, while optional hybrid tuning pushes authoritative parameter changes back to workers. A condensed worker-loop sketch follows this list.
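
The registration and assignment cycle reduces to a small loop. The sketch below is illustrative only: the method names on the sync client (register, pull_assignment, report_results) are assumptions, not the exact ServerSyncClient API.

```python
# Minimal sketch of steps 2-4, assuming hypothetical sync-client method
# names; the real ServerSyncClient interface may differ.
def run_worker(sync_client, executor):
    worker_id = sync_client.register()                       # step 2: register
    while True:
        assignment = sync_client.pull_assignment(worker_id)  # step 3: pull work
        if assignment is None:
            break                                            # server shutdown
        result = executor.execute_assignment(assignment)     # run organism slices
        sync_client.report_results(worker_id, result)        # step 4: aggregate
```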

Execution Cycle (LocalExecutor.execute_assignment) {#execution-cycle}

  1. Preparation – Received organisms are registered locally and, if needed, prior snapshots are restored to maintain diff-based reporting (src/bytebiota/worker/executor.py).
  2. Time slicing – For each organism, _execute_organism_timeslice computes available instructions using the shared Scheduler configuration and enforces execution/runtime limits.
  3. Instruction dispatch – The ISA decodes opcodes, builds an ExecutionContext, and invokes handlers that access soup memory, OS calls, mutation engine, reaper, scheduler, and shared environment handles.
  4. Mutation & energy tracking – Copy-time/background mutations, energy debits, harvest gains, and reproduction attempts are recorded as the organism progresses through its slice.
  5. Lifecycle events – Completed births are materialized via the organism factory, deaths reclaim memory and queue slots, and seed submissions are captured for server-side evaluation.
  6. Environment maintenance – Resource regeneration, signal decay, and gradient updates run on the executor's cadence, using the same parameters as the authoritative server configuration.
  7. Reporting – The executor returns full and intermediate deltas (organism snapshots, births, deaths, seeds, and local stats), which the worker connector forwards to StateAggregator. A skeleton of these phases is sketched below.
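
As a condensed skeleton, the seven phases map onto a loop like the one below. Every helper called on self is an illustrative stand-in for whatever LocalExecutor does internally, not its actual method names.

```python
# Condensed skeleton of the seven phases above; all self._* helpers are
# hypothetical names, not the real LocalExecutor API.
def execute_assignment(self, assignment):
    self._register_organisms(assignment)                # 1. preparation
    births, deaths, seeds = [], [], []
    for org in self._local_organisms():
        budget = self._compute_slice(org)               # 2. time slicing
        for _ in range(budget):
            self._dispatch_instruction(org)             # 3. ISA dispatch
            self._track_mutation_and_energy(org)        # 4. mutation & energy
        births.extend(self._collect_births(org))        # 5. lifecycle events
        seeds.extend(self._collect_seeds(org))
        if self._is_dead(org):
            deaths.append(self._reclaim(org))
    self._tick_environment()                            # 6. maintenance
    return self._build_report(births, deaths, seeds)    # 7. reporting
```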

Scheduling & Energy Economics {#scheduler}

  • Instruction budget: ceil(genome_size ** slicer_power) + base_slice, capped by max_slice (src/bytebiota/scheduler.py); see the helper sketch after this list.
  • Energy debits use EnergyConfig values and account for opcode family multipliers (src/bytebiota/worker/executor.py, _execute_organism_timeslice).
  • Harvest, task rewards, and attempt stipends replenish energy based on environment and energy configuration.
  • Memory rent and arrears are enforced cooperatively by the executor and the reaper, using thresholds from the reaper configuration (src/bytebiota/reaper.py).
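
The instruction-budget formula from the first bullet reads as a small helper. Parameter names mirror the prose; the actual SchedulerConfig field names may differ.

```python
import math

def instruction_budget(genome_size: int, slicer_power: float,
                       base_slice: int, max_slice: int) -> int:
    # ceil(genome_size ** slicer_power) + base_slice, capped at max_slice
    budget = math.ceil(genome_size ** slicer_power) + base_slice
    return min(budget, max_slice)

# Example: an 80-instruction genome with slicer_power=1.0, base_slice=10,
# max_slice=500 receives min(80 + 10, 500) = 90 instructions this slice.
```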

Mutation Integration {#mutation-integration}

  • MutationEngine tracks global instruction counts to trigger background flips and applies copy-time mutations during genome replication (src/bytebiota/mutation.py, worker/executor.py).
  • Structural mutations (insertions/deletions) occur through MutationEngine.insertion_deletion_mutation, with rates driven by MutationConfig.
  • Worker executors persist mutation stats in the returned telemetry so the server's analytics pipeline can surface mutation health indicators; a copy-time flip sketch follows this list.
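
A copy-time mutation reduces to a per-byte coin flip during replication. The sketch below assumes a single-bit flip per event and a flat per-byte rate; the real MutationEngine behaviour and MutationConfig field names may differ.

```python
import random

def copy_with_mutation(byte: int, copy_mutation_rate: float,
                       rng: random.Random) -> int:
    # Each copied byte has a small chance of one random bit flipping.
    if rng.random() < copy_mutation_rate:
        byte ^= 1 << rng.randrange(8)  # flip one of the eight bits
    return byte & 0xFF
```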

Reproduction Flow {#reproduction-flow}

  1. MAL (memory allocation) and DIVIDE (finalize reproduction) system calls run through OSCalls, which the executor wires to its local memory, scheduler, and reaper instances (src/bytebiota/os_calls.py, worker/executor.py).
  2. Organism.start_reproduction records pending child metadata while the parent copies bytes into allocated soup space (src/bytebiota/organism.py).
  3. When DIVIDE succeeds, organism_factory.create_offspring assembles the child, transfers energy, and inserts it into scheduler/reaper queues immediately after the parent.
  4. If reproduction stalls or DIVIDE fails, _release_pending_child returns soup memory to the free list to avoid leaks (see the happy-path sketch below).
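
The happy path through this flow looks roughly like the sketch below. The mal/write/divide method names are stand-ins for the OSCalls entry points; only start_reproduction and the overall ordering come from the description above.

```python
# Illustrative MAL -> copy -> DIVIDE happy path; method names are assumptions.
def reproduce(parent, os_calls):
    child_addr = os_calls.mal(parent, size=len(parent.genome))  # step 1: allocate
    if child_addr is None:
        return None                          # allocation failed; retry later
    parent.start_reproduction(child_addr)    # step 2: record pending child
    for offset, byte in enumerate(parent.genome):
        os_calls.write(parent, child_addr + offset, byte)       # copy genome
    return os_calls.divide(parent)           # step 3: factory builds the child
```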

Environment & Interaction {#environment-integration}

  • The server constructs a shared Environment instance so all workers operate against a consistent resource/signal field (src/bytebiota/server/server.py).
  • Workers mirror regeneration, signal decay, gradients, and hotspot updates in _execute_assignment to keep the local view in sync with the server parameters (a maintenance-tick sketch follows this list).
  • ISA handlers expose sensing, signaling, storage, and task submission primitives defined in environment.py and documented in ../simulation/opcode-reference.md.
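
The maintenance tick amounts to a regeneration pass and a decay pass. The sketch below is a minimal illustration; the parameter names (regen_rate, decay_rate, cap) stand in for whatever EnvironmentConfig actually exposes.

```python
def environment_tick(resources, signals, regen_rate, decay_rate, cap):
    # Resources regenerate toward a cap; signals decay exponentially.
    for cell in resources:
        cell.amount = min(cell.amount + regen_rate, cap)
    for sig in signals:
        sig.strength *= (1.0 - decay_rate)
```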

Mortality, Seed Bank, and Diversity {#mortality}

  • Reaper enforces age, error, rent, and memory-pressure policies using thresholds from ReaperConfig. Worker executors apply these rules as part of their slice execution (see the threshold sketch after this list).
  • StateAggregator consolidates deaths, births, and mutation stats, updates global_stats, and triggers stagnation reseeding using SeedBankService.
  • Seed bank quotas, taxonomy tagging, and diversity tracking ensure lineage variety is preserved and surfaced to analytics consumers (src/bytebiota/server/seed_bank_service.py, taxonomy.py).
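
The threshold side of the mortality policy reduces to a predicate like the one below; the field names on org and cfg are assumptions standing in for the real ReaperConfig knobs, and memory-pressure sweeps run separately.

```python
def should_reap(org, cfg) -> bool:
    # Age, error, and rent-arrears checks; memory-pressure sweeps are
    # handled elsewhere, culling organisms when the soup fills up.
    return (org.age > cfg.max_age
            or org.error_count > cfg.max_errors
            or org.rent_arrears > cfg.max_arrears)
```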

Observability & Persistence {#observability}

  • run_logger instances capture structured events for both server and workers (src/bytebiota/run_logger.py).
  • AnalyticsService exposes state aggregation, mutation summaries, and diversity metrics over the monitoring APIs.
  • CheckpointService coordinates periodic distributed checkpoints, requesting worker snapshots and persisting server assignments/environment metadata for later restoration (src/bytebiota/server/checkpoint_service.py).
  • Hybrid tuning (optional) records evaluation payloads and authoritative config changes through TuningCheckpointManager.

Extending the Simulation {#extensibility}

  • Add or modify ISA instructions in src/bytebiota/isa.py and update ../simulation/opcode-reference.md (see the handler sketch after this list).
  • Extend taxonomy heuristics in src/bytebiota/taxonomy.py and document behavioural expectations in ../biology/taxonomy-overview.md.
  • Adjust environment dynamics in src/bytebiota/environment.py, reflecting the change in world-model.md and relevant operational guides.
  • Introduce new mutation behaviours via MutationEngine, ensuring downstream biological documentation (../biology/traits-and-mutations.md) and analytics expectations stay aligned.
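
For the first bullet, a new instruction usually means a handler plus a registry entry. The decorator pattern and opcode number below are hypothetical; check src/bytebiota/isa.py for the real dispatch mechanism before copying this shape.

```python
HANDLERS = {}

def opcode(code: int):
    """Register a handler for an opcode (illustrative registry pattern)."""
    def register(fn):
        HANDLERS[code] = fn
        return fn
    return register

@opcode(0x2A)  # hypothetical opcode number
def handle_sense_energy(ctx):
    # Example behaviour: expose the organism's energy through a register.
    ctx.registers["ax"] = ctx.organism.energy
```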

Memory System {#memory}

SoupMemory (src/bytebiota/memory.py) manages contiguous allocation, write-access permissions, and template-aware scanning within the soup. Workers rely on it for every MAL/DIVIDE cycle; update this section whenever allocation strategies, mutation-aware writes, or initialization patterns change.
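
For orientation, contiguous allocation over a free list can be as simple as the first-fit sketch below. SoupMemory's actual fit policy, permission handling, and initialization live in src/bytebiota/memory.py; nothing here should be read as its implementation.

```python
def first_fit_allocate(free_list, size):
    # free_list holds (start, length) blocks; claim the first large enough.
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                free_list.pop(i)                              # exact fit
            else:
                free_list[i] = (start + size, length - size)  # shrink block
            return start
    return None  # no contiguous block available; caller retries later
```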

Scheduler {#scheduler-details}

Scheduler (src/bytebiota/scheduler.py) implements size-neutral round-robin slicing with optional priority boosts. Record formula tweaks, queue-mutation rules, or boost heuristics here to keep worker expectations and server analytics synchronized.

Mutation Engine {#mutation-engine}

MutationEngine (src/bytebiota/mutation.py) applies copy-time bit flips, background radiation, and insertion/deletion events according to MutationConfig. Use this section to log new mutation modes or rate adjustments that impact evolutionary dynamics.

Reaper System {#reaper-system}

Reaper (src/bytebiota/reaper.py) enforces mortality via age/error thresholds, rent collection, and memory-pressure sweeps. Document queue-handling logic, repopulation triggers, or protection rules here when adapting the mortality model.

Organism Model {#organism-model}

Organism and organism_factory (src/bytebiota/organism.py) hold register state, reproduction bookkeeping, and metabolic constraints for every agent. Record structural field changes, energy-sharing behaviour, or serialization updates in this section.
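
A rough mental model of that state is a record like the sketch below; every field name here is a guess for illustration, and the real Organism class carries more.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OrganismSketch:
    organism_id: int
    genome_start: int                    # genome address in soup memory
    genome_size: int
    energy: float = 0.0                  # metabolic budget
    age: int = 0
    error_count: int = 0
    registers: dict = field(default_factory=dict)
    pending_child: Optional[int] = None  # reproduction bookkeeping (MAL address)
```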

OS Calls {#os-calls}

OSCalls (src/bytebiota/os_calls.py) preserves MAL and DIVIDE semantics for distributed execution, coordinating memory allocation, scheduler insertion, reaper bookkeeping, and seed capture. Reflect any reproduction guard changes or allocation retries here.

Configuration {#configuration}

Config dataclasses (src/bytebiota/config.py) expose tunable parameters for soup, scheduler, mutation engine, reaper, environment, energy, and evolution, while ServerConfig/WorkerConfig add orchestration concerns. Synchronize default adjustments and new knobs here to keep experiment setup reproducible.
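
Each knob group follows the dataclass pattern; the sketch below shows the shape with hypothetical field names and defaults, not the actual contents of src/bytebiota/config.py.

```python
from dataclasses import dataclass

@dataclass
class MutationConfigSketch:
    copy_mutation_rate: float = 1e-3        # per-byte copy-time flip chance
    background_flip_interval: int = 10_000  # instructions between background flips
    insertion_deletion_rate: float = 1e-4   # structural mutation chance
```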

Keep this foundation page close to the code: every architectural change should be reflected here to keep onboarding and research notes authoritative.