
Architecture

This document describes the overall architecture of eCapture, explaining how the system is structured into layers and how data flows from the command-line interface through eBPF probes to final output. The architecture follows a five-layer design: CLI Layer → Module Orchestration → eBPF Execution → Event Processing → Output.

For details on specific capture modules (OpenSSL, GoTLS, etc.), see Capture Modules. For information about the eBPF implementation, see eBPF Engine. For event processing internals, see Event Processing Pipeline.


System Overview

eCapture is organized as a modular eBPF-based capture system. The architecture separates concerns into distinct layers, allowing new capture modules to be added without modifying core infrastructure. Each module implements the IModule interface and manages its own eBPF programs, while sharing common event processing and output mechanisms.

Sources: README.md:36-44, cli/cmd/root.go:44-51, user/module/imodule.go:47-75


Five-Layer Architecture

Architecture Overview: Five distinct layers with clear separation of concerns

The architecture consists of five primary layers:

  1. CLI Layer: Parses commands and flags, manages configuration
  2. Module Orchestration Layer: Implements the IModule interface pattern, coordinates module lifecycle
  3. eBPF Execution Layer: Loads and manages eBPF programs, attaches probes to target functions
  4. Event Processing Layer: Aggregates and parses raw eBPF events into structured data
  5. Output Layer: Formats and writes processed events to various destinations

Sources: cli/cmd/root.go:80-133, user/module/imodule.go:47-75, user/module/probe_openssl.go:83-106


CLI Layer

The CLI layer is implemented using the Cobra framework and provides the entry point for all eCapture operations.

CLI Command Flow: From user input to module execution

The rootCmd in cli/cmd/root.go:81-113 is the root Cobra command. It defines global flags that apply to all subcommands:

| Flag | Type | Purpose | Default |
|------|------|---------|---------|
| `--pid` / `-p` | uint64 | Target process ID (0 = all processes) | 0 |
| `--uid` / `-u` | uint64 | Target user ID (0 = all users) | 0 |
| `--debug` / `-d` | bool | Enable debug logging | false |
| `--btf` / `-b` | uint8 | BTF mode (0=auto, 1=core, 2=non-core) | 0 |
| `--mapsize` | int | eBPF map size per CPU (KB) | 1024 |
| `--logaddr` / `-l` | string | Logger output address | "" |
| `--listen` | string | HTTP API listen address | "localhost:28256" |
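The flag set above can be sketched with the standard library's `flag` package. eCapture itself registers these as Cobra persistent flags; the names and defaults below mirror the table, but the parsing code is purely illustrative:

```go
package main

import (
	"flag"
	"fmt"
)

// globalConfig mirrors the global flags in the table above.
type globalConfig struct {
	pid     uint64
	uid     uint64
	debug   bool
	btf     uint
	mapSize int
	logAddr string
	listen  string
}

// parseGlobalFlags registers the same names and defaults as the table.
func parseGlobalFlags(args []string) (*globalConfig, error) {
	cfg := &globalConfig{}
	fs := flag.NewFlagSet("ecapture", flag.ContinueOnError)
	fs.Uint64Var(&cfg.pid, "pid", 0, "target process ID (0 = all processes)")
	fs.Uint64Var(&cfg.uid, "uid", 0, "target user ID (0 = all users)")
	fs.BoolVar(&cfg.debug, "debug", false, "enable debug logging")
	fs.UintVar(&cfg.btf, "btf", 0, "BTF mode (0=auto, 1=core, 2=non-core)")
	fs.IntVar(&cfg.mapSize, "mapsize", 1024, "eBPF map size per CPU (KB)")
	fs.StringVar(&cfg.logAddr, "logaddr", "", "logger output address")
	fs.StringVar(&cfg.listen, "listen", "localhost:28256", "HTTP API listen address")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	cfg, err := parseGlobalFlags([]string{"--pid", "1234", "--debug"})
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.pid, cfg.debug, cfg.listen) // 1234 true localhost:28256
}
```

Flags left unset fall back to their defaults, which is how `--pid 0` comes to mean "capture all processes".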

Each subcommand (e.g., tls, gotls, bash) eventually calls runModule() at cli/cmd/root.go:250-403, which:

  1. Creates module-specific configuration from global configuration using setModConfig() cli/cmd/root.go:157-175
  2. Initializes loggers and event collectors cli/cmd/root.go:282-295
  3. Starts an HTTP server for runtime configuration updates cli/cmd/root.go:313-322
  4. Initializes the module via IModule.Init() cli/cmd/root.go:352-356
  5. Runs the module via IModule.Run() cli/cmd/root.go:358-362
  6. Handles signals for reload or shutdown cli/cmd/root.go:367-396
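The sequence above boils down to an init/run/shutdown loop. A minimal sketch, with a stand-in for the real IModule interface and signal handling reduced to SIGINT/SIGTERM:

```go
package main

import (
	"context"
	"fmt"
	"os/signal"
	"syscall"
)

// module is a stand-in for the Init/Run/Close subset of the real IModule.
type module interface {
	Init(ctx context.Context) error
	Run() error
	Close() error
}

// runModule mirrors the lifecycle: initialize, run in the background, then
// block until the module finishes or a signal requests shutdown.
func runModule(m module) error {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	if err := m.Init(ctx); err != nil {
		return fmt.Errorf("init: %w", err)
	}
	errCh := make(chan error, 1)
	go func() { errCh <- m.Run() }()

	select {
	case <-ctx.Done(): // SIGINT/SIGTERM received
	case err := <-errCh: // module finished on its own
		if err != nil {
			return fmt.Errorf("run: %w", err)
		}
	}
	return m.Close()
}

// fakeModule lets the sketch run without any eBPF machinery.
type fakeModule struct{ closed bool }

func (f *fakeModule) Init(context.Context) error { return nil }
func (f *fakeModule) Run() error                 { return nil }
func (f *fakeModule) Close() error               { f.closed = true; return nil }

func main() {
	f := &fakeModule{}
	if err := runModule(f); err != nil {
		panic(err)
	}
	fmt.Println("closed:", f.closed) // closed: true
}
```

Running the module in a goroutine keeps the main goroutine free to react to signals, which is also what makes the reload-on-signal behavior possible.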

Sources: cli/cmd/root.go:80-154, cli/cmd/root.go:157-175, cli/cmd/root.go:250-403


Module Orchestration Layer

The module orchestration layer is centered around the IModule interface, which all capture modules implement.

IModule Interface and Implementations

The IModule interface at user/module/imodule.go:47-75 defines the contract for all capture modules:

  • Init(context.Context, *zerolog.Logger, config.IConfig, io.Writer) error: Initialize the module with context, logger, configuration, and event writer
  • Name() string: Return the module name
  • Start() error: Start the eBPF programs and attach probes
  • Run() error: Begin reading events from eBPF maps
  • Events() []*ebpf.Map: Return the eBPF maps that contain events
  • DecodeFun(*ebpf.Map) (event.IEventStruct, bool): Return the decoder function for a specific map
  • Dispatcher(event.IEventStruct): Process and route decoded events
  • Close() error: Clean up resources
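Assembled from the bullets above, the interface can be rendered as compilable Go by substituting local stand-ins for the external types (`*ebpf.Map`, `*zerolog.Logger`, and so on); the real definition lives in user/module/imodule.go:

```go
package main

import (
	"context"
	"fmt"
	"io"
)

// Local stand-ins for the external types so the sketch compiles on its own.
type (
	Map          struct{}    // stands in for *ebpf.Map
	Logger       struct{}    // stands in for *zerolog.Logger
	IConfig      interface{} // stands in for config.IConfig
	IEventStruct interface{} // stands in for event.IEventStruct
)

// IModule mirrors the contract listed above.
type IModule interface {
	Init(context.Context, *Logger, IConfig, io.Writer) error
	Name() string
	Start() error
	Run() error
	Events() []*Map
	DecodeFun(*Map) (IEventStruct, bool)
	Dispatcher(IEventStruct)
	Close() error
}

// noopModule shows the minimal shape of an implementation.
type noopModule struct{}

func (noopModule) Init(context.Context, *Logger, IConfig, io.Writer) error { return nil }
func (noopModule) Name() string                                            { return "noop" }
func (noopModule) Start() error                                            { return nil }
func (noopModule) Run() error                                              { return nil }
func (noopModule) Events() []*Map                                          { return nil }
func (noopModule) DecodeFun(*Map) (IEventStruct, bool)                     { return nil, false }
func (noopModule) Dispatcher(IEventStruct)                                 {}
func (noopModule) Close() error                                            { return nil }

func main() {
	var m IModule = noopModule{}
	fmt.Println(m.Name()) // noop
}
```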

The embedded Module base struct at user/module/imodule.go:83-108 provides common functionality shared by all concrete modules.

Sources: user/module/imodule.go:47-108, user/module/imodule.go:236-262, user/module/imodule.go:285-391


Module Lifecycle

The module lifecycle follows a three-phase pattern: Init → Run → Close.

Module Lifecycle: Three-phase initialization, execution, and cleanup

Init Phase

The Init() method performs module initialization:

  1. Context and logger setup at user/module/imodule.go:111-127
  2. BTF detection using autoDetectBTF() at user/module/imodule.go:173-190
  3. Kernel version check at user/module/imodule.go:140-149
  4. EventProcessor creation at user/module/imodule.go:127
  5. Child-specific initialization (e.g., OpenSSL version detection at user/module/probe_openssl.go:109-176)
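The kernel version check in step 3 amounts to a simple major/minor gate. A minimal sketch of such a gate (the real check parses the running kernel's release string and is more involved):

```go
package main

import "fmt"

// kernelAtLeast sketches the version gate: compare the running kernel's
// major.minor against a minimum supported version.
func kernelAtLeast(major, minor, wantMajor, wantMinor int) bool {
	if major != wantMajor {
		return major > wantMajor
	}
	return minor >= wantMinor
}

func main() {
	fmt.Println(kernelAtLeast(5, 4, 5, 2))  // true: 5.4 satisfies >= 5.2
	fmt.Println(kernelAtLeast(4, 19, 5, 2)) // false: 4.19 is too old
}
```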

Run Phase

The Run() method orchestrates execution:

  1. Call Start() on the child module at user/module/imodule.go:239-242
  2. Start event reading goroutines at user/module/imodule.go:256-259
  3. Start EventProcessor at user/module/imodule.go:249-254
  4. Read events from eBPF maps at user/module/imodule.go:285-305

The Start() method (implemented by child modules) loads and attaches eBPF programs:

  1. Setup managers based on capture mode at user/module/probe_openssl.go:284-300
  2. Load bytecode from embedded assets at user/module/probe_openssl.go:310-326
  3. Initialize bpfManager at user/module/probe_openssl.go:320-326
  4. Start bpfManager (attach probes) at user/module/probe_openssl.go:328-331
  5. Initialize decode functions at user/module/probe_openssl.go:333-347

Close Phase

The Close() method performs cleanup:

  1. Stop bpfManager and detach probes at user/module/probe_openssl.go:352-357
  2. Close EventProcessor at user/module/imodule.go:458-459
  3. Close event readers at user/module/imodule.go:453-457

Sources: user/module/imodule.go:111-171, user/module/imodule.go:236-262, user/module/probe_openssl.go:109-176, user/module/probe_openssl.go:280-350


eBPF Execution Layer

The eBPF execution layer manages the loading, initialization, and lifecycle of eBPF programs.

eBPF Program Loading and Attachment

Bytecode Selection

eCapture uses different eBPF bytecode files depending on:

  1. Target library version: OpenSSL 1.0.x, 1.1.x, 3.0.x, 3.x, BoringSSL variants user/module/probe_openssl.go:178-278
  2. CO-RE support: Kernel BTF availability determines CO-RE vs non-CO-RE bytecode user/module/imodule.go:173-190
  3. Kernel version: kernels older than 5.2 impose additional restrictions on eBPF programs user/module/imodule.go:140-149

The geteBPFName() method at user/module/imodule.go:191-214 selects the appropriate bytecode file by appending _core.o or _noncore.o to the base filename.
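The suffix selection can be sketched as pure string logic. This is a simplification; the real geteBPFName may account for additional variants beyond the core/non-core split:

```go
package main

import (
	"fmt"
	"strings"
)

// bytecodeName sketches the selection described above: strip any .o
// extension from the base name, then append _core.o or _noncore.o depending
// on whether the running kernel supports BTF.
func bytecodeName(base string, btfSupported bool) string {
	base = strings.TrimSuffix(base, ".o")
	if btfSupported {
		return base + "_core.o"
	}
	return base + "_noncore.o"
}

func main() {
	fmt.Println(bytecodeName("openssl_3_0_kern.o", true))  // openssl_3_0_kern_core.o
	fmt.Println(bytecodeName("openssl_3_0_kern.o", false)) // openssl_3_0_kern_noncore.o
}
```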

Manager Initialization

The bpfManager from the ebpfmanager library manages eBPF program lifecycle:

  1. Load bytecode from embedded assets via assets.Asset() user/module/probe_openssl.go:312-317
  2. Initialize manager with InitWithOptions() user/module/probe_openssl.go:320-326
  3. Start manager to attach probes with Start() user/module/probe_openssl.go:328-331

The bpfManagerOptions struct contains:

  • Constants: Target PID, UID, kernel version flags user/module/probe_openssl.go:361-395
  • Probes: List of uprobe/kprobe/TC programs to attach
  • Maps: References to eBPF maps for event reading

Event Maps

Each module defines its own eBPF maps for event collection.

Sources: user/module/probe_openssl.go:178-278, user/module/probe_openssl.go:280-350, user/module/imodule.go:173-214, user/module/imodule.go:308-391


Event Processing Layer

The event processing layer aggregates raw eBPF events, buffers payloads, and parses protocol data. For detailed information, see Event Processing Pipeline.

Event Processing: Aggregation, buffering, and parsing

Event Decoding

Raw bytes from eBPF maps are decoded into event structures:

  1. Get decoder function via DecodeFun() user/module/imodule.go:228-230
  2. Decode bytes into event struct via Decode() user/module/imodule.go:393-406
  3. Dispatch event via Dispatcher() user/module/imodule.go:408-448

Event Processor

The EventProcessor at user/module/imodule.go:127 manages worker pools:

  • UUID-based routing: Events with the same UUID (connection ID) go to the same worker
  • Worker lifecycle: Workers are created on-demand and destroyed after inactivity
  • Buffered accumulation: Workers accumulate event fragments before parsing

See Event Processing Pipeline for implementation details.
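The UUID-keyed routing above can be sketched with one goroutine per connection. String payloads stand in for event fragments here, and the idle-timeout and protocol-parsing logic of the real EventProcessor is omitted:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// processor routes each fragment to a per-UUID worker goroutine so fragments
// belonging to one connection are reassembled in arrival order.
type processor struct {
	mu      sync.Mutex
	workers map[string]chan string // UUID -> worker inbox
	wg      sync.WaitGroup
	out     chan string
}

func newProcessor() *processor {
	return &processor{workers: make(map[string]chan string), out: make(chan string, 64)}
}

func (p *processor) dispatch(uuid, fragment string) {
	p.mu.Lock()
	ch, ok := p.workers[uuid]
	if !ok { // create a worker on demand for a new connection
		ch = make(chan string, 16)
		p.workers[uuid] = ch
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			var buf string
			for frag := range ch { // accumulate fragments before parsing
				buf += frag
			}
			p.out <- uuid + ":" + buf
		}()
	}
	p.mu.Unlock()
	ch <- fragment
}

func (p *processor) close() {
	p.mu.Lock()
	for _, ch := range p.workers {
		close(ch)
	}
	p.mu.Unlock()
	p.wg.Wait()
	close(p.out)
}

func main() {
	p := newProcessor()
	p.dispatch("conn-1", "GET /")
	p.dispatch("conn-1", " HTTP/1.1")
	p.dispatch("conn-2", "hello")
	p.close()
	var lines []string
	for line := range p.out {
		lines = append(lines, line)
	}
	sort.Strings(lines)
	fmt.Println(lines) // [conn-1:GET / HTTP/1.1 conn-2:hello]
}
```

Because each UUID maps to exactly one channel, per-connection ordering is preserved while different connections proceed concurrently.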

Sources: user/module/imodule.go:285-448, user/module/probe_openssl.go:741-783


Output Layer

The output layer formats processed events and writes them to configured destinations.

Output Formatting and Destinations

Output Format Selection

The output format is determined by the type of writer attached to the eventCollector.

The format is applied in Module.output() at user/module/imodule.go:461-479:

```go
if m.eventOutputType == codecTypeProtobuf {
	// Marshal to protobuf
	le := new(pb.LogEntry)
	le.LogType = pb.LogType_LOG_TYPE_EVENT
	ep := e.ToProtobufEvent()
	// ...
} else {
	// Convert to string
	s := e.String()
	// ...
}
```

Output Destinations

Output destinations are configured via the --logaddr and --eventaddr flags:

| Destination Type | Flag Format | Implementation |
|------------------|-------------|----------------|
| Stdout (default) | (none) | `zerolog.ConsoleWriter` to `os.Stdout` |
| File | `/path/to/file.log` | `os.Create()` file handle |
| TCP | `tcp://host:port` | `net.Dial("tcp", addr)` |
| WebSocket | `ws://host:port/path` | `ws.NewClient().Dial()` |

Logger initialization at cli/cmd/root.go:178-247 creates appropriate writers based on the address format.

Module-Specific Output

Some modules provide specialized output modes; see Output Formats for details on each format.

Sources: user/module/imodule.go:111-127, user/module/imodule.go:461-479, cli/cmd/root.go:178-247, user/config/iconfig.go:73-79


Data Flow Summary

The complete data flow through the architecture:

  1. User executes a CLI command → rootCmd.Execute() parses flags
  2. Subcommand handler calls runModule() with module name and config
  3. Module initialization → IModule.Init() detects libraries, selects bytecode
  4. Module start → IModule.Run() loads eBPF, attaches probes, starts the event processor
  5. eBPF probes capture data in kernel, write to maps
  6. Event readers poll maps, decode bytes into event structs
  7. Dispatcher routes events to event processor or module-specific handlers
  8. Event processor aggregates fragments, buffers payloads, parses protocols
  9. Output formatters convert to text or protobuf
  10. Writers send to stdout, file, TCP, or WebSocket

This architecture provides:

  • Modularity: New modules implement IModule without changing core code
  • Flexibility: Multiple output formats and destinations
  • Performance: Asynchronous event processing with worker pools
  • Extensibility: Protocol parsers and output writers are pluggable

Sources: cli/cmd/root.go:250-403, user/module/imodule.go:236-262, user/module/imodule.go:285-448
