Configure the logger

The logger is stateless until configured. The configure() call (per spec §9.2) sets up the root logger, attaches sinks, applies per-logger severity overrides, and seeds the OTel Resource with process-level attributes. Run it once at application startup, before any business code calls Logger.get(name).

Step 1. Decide the configuration source

In a typical dagstack application, the logger reads its section from an app-config.yaml parsed by dagstack/config. The logger-python package itself does not depend on config-python — the application extracts the logging: section from its Config, dumps it to a dict, and passes the values to configure(). This keeps the two libraries independent and avoids circular dependencies.

A canonical YAML section looks like this:

app-config.yaml
logging:
  level: ${LOG_LEVEL:-INFO}

  resource:
    service.name: ${SERVICE_NAME:-order-service}
    service.version: ${SERVICE_VERSION:-dev}
    deployment.environment: ${DAGSTACK_ENV:-development}

  loggers:
    httpx: WARN
    urllib3: WARN
    order_service.checkout: DEBUG

  sinks:
    - type: console
      mode: ${LOG_CONSOLE_MODE:-auto}
      min_severity: ${LOG_LEVEL:-INFO}
    - type: file
      path: /var/log/order-service.jsonl
      max_bytes: 100000000
      keep: 10
      min_severity: INFO

The fields above match the LoggerSchema from spec §9.2; bindings emit a native schema (Pydantic, zod, Go struct) that validates the section before it reaches the logger.
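
For orientation, here is a minimal Pydantic sketch of what such a schema might look like in the Python binding. The field names follow the YAML above, but LoggingSection and SinkSpec are illustrative names for this guide, not the binding's actual LoggerSchema classes:

from pydantic import BaseModel, Field


class SinkSpec(BaseModel):
    type: str
    mode: str | None = None
    path: str | None = None
    max_bytes: int = 0
    keep: int = 0
    min_severity: str = "INFO"


class LoggingSection(BaseModel):
    level: str = "INFO"
    resource: dict[str, str] = Field(default_factory=dict)
    loggers: dict[str, str] = Field(default_factory=dict)
    sinks: list[SinkSpec] = Field(default_factory=list)


# Validate the raw dict extracted from app-config.yaml before it reaches configure().
section = LoggingSection.model_validate({
    "level": "INFO",
    "sinks": [{"type": "console", "mode": "auto"}],
})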

Step 2. Build sinks from the config

Convert each sink entry into a binding-native sink instance, then pass the list to configure():

from dagstack.logger import ConsoleSink, FileSink, configure


def build_sinks(sink_specs: list[dict]) -> list:
    sinks = []
    for spec in sink_specs:
        kind = spec["type"]
        if kind == "console":
            sinks.append(ConsoleSink(
                mode=spec.get("mode", "auto"),
                min_severity=_resolve_severity(spec.get("min_severity", "INFO")),
            ))
        elif kind == "file":
            sinks.append(FileSink(
                path=spec["path"],
                max_bytes=spec.get("max_bytes", 0),
                keep=spec.get("keep", 0),
                min_severity=_resolve_severity(spec.get("min_severity", "INFO")),
            ))
        else:
            raise ValueError(f"unsupported sink type: {kind!r}")
    return sinks


def _resolve_severity(value):
    # configure() also accepts these strings directly; this helper is
    # for sinks where the constructor expects an int.
    return {
        "TRACE": 1, "DEBUG": 5, "INFO": 9,
        "WARN": 13, "ERROR": 17, "FATAL": 21,
    }[value.upper()]
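
The numeric values follow the OpenTelemetry SeverityNumber mapping (TRACE=1, DEBUG=5, INFO=9, WARN=13, ERROR=17, FATAL=21). On its own, the helper can be exercised with hand-written specs instead of YAML; the paths and sizes below are illustrative only:

# Hypothetical specs, as they would look after YAML parsing and env substitution.
sinks = build_sinks([
    {"type": "console", "mode": "pretty"},
    {"type": "file", "path": "/tmp/order-service.jsonl", "max_bytes": 10_000_000, "keep": 3},
])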

Step 3. Call configure() at startup

from dagstack.config import Config
from dagstack.logger import Logger, configure


def bootstrap():
    config = Config.load("app-config.yaml")
    log_section = config.get("logging", default={})

    configure(
        root_level=log_section.get("level", "INFO"),
        sinks=build_sinks(log_section.get("sinks", [])),
        per_logger_levels=log_section.get("loggers", {}),
        resource_attributes=log_section.get("resource", {}),
    )

    # Now business code can resolve loggers by name.
    Logger.get("order_service.bootstrap").info("logger configured")


if __name__ == "__main__":
    bootstrap()
    run_application()

What configure() does

Per spec §9.2 the call:

  1. Resolves root_level (string "INFO" or numeric 9) into a severity number and applies it to the root logger.
  2. Replaces the root logger's sinks with the supplied list. Children of the root inherit the sinks unless overridden.
  3. For each entry in per_logger_levels, applies a severity override to that named logger. The override sticks even after children are created.
  4. If resource_attributes is non-empty, builds a Resource and attaches it to the root logger; every record inherits it (unless a child logger sets its own Resource).

The call is idempotent — calling configure() again replaces the previous setup atomically. In-flight records emitted by other threads complete against the old configuration before the new sinks take over.
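
Because of this idempotency, the same call covers live reconfiguration. A sketch, reusing build_sinks from Step 2 and assuming the application wires SIGHUP to a config reload (the signal handling is this guide's convention, not part of the logger):

import signal

from dagstack.config import Config
from dagstack.logger import configure


def reload_logging(signum, frame):
    # Re-read app-config.yaml and apply it; the new sinks and levels
    # replace the previous setup atomically, no restart required.
    section = Config.load("app-config.yaml").get("logging", default={})
    configure(
        root_level=section.get("level", "INFO"),
        sinks=build_sinks(section.get("sinks", [])),
        per_logger_levels=section.get("loggers", {}),
        resource_attributes=section.get("resource", {}),
    )


signal.signal(signal.SIGHUP, reload_logging)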

Step 4. Per-logger overrides

Use the per_logger_levels argument to silence noisy third-party loggers and raise the verbosity of specific modules:

configure(
    root_level="INFO",
    sinks=[ConsoleSink(mode="auto")],
    per_logger_levels={
        "httpx": "WARN",
        "urllib3": "WARN",
        "order_service.checkout": "DEBUG",
    },
    resource_attributes={"service.name": "order-service"},
)

The override applies even if Logger.get("order_service.checkout") is called after configure(): the registry consults the per-logger-level map before returning the cached or freshly created logger.
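
For instance, building on the snippet above (and assuming the binding exposes a debug() counterpart to the info() call shown in Step 3):

# Resolved after configure(); the DEBUG override still applies.
checkout_log = Logger.get("order_service.checkout")
checkout_log.debug("price recomputed")                # emitted: override floor is DEBUG

# httpx was pinned to WARN, so its INFO chatter is dropped.
Logger.get("httpx").info("connection pool started")   # dropped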

Common pitfalls

  • Calling configure() after the first emit. Records emitted before the call go to the bootstrap default (a plain ConsoleSink in pretty mode, severity floor INFO). Call configure() as the first line of your startup function.
  • Forgetting service.name. OTel observability backends key all records by Resource.service.name; without it, your records land in an unattributed bucket. Always set it via resource_attributes.
  • Two different service.versions in two replicas. Set service.version from the build's git SHA or release tag, not from a runtime variable that drifts between replicas.
  • Sink severity below logger severity. The logger applies its own min_severity filter before fan-out. If root_level=INFO and a sink declares min_severity=DEBUG, the sink still receives only INFO+ records; the logger drops DEBUG early (see the sketch below).
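
A quick sketch of that last pitfall, reusing _resolve_severity from Step 2 (same assumptions about the severity helper methods as above):

configure(
    root_level="INFO",
    sinks=[ConsoleSink(mode="auto", min_severity=_resolve_severity("DEBUG"))],
)

log = Logger.get("order_service")
log.debug("dropped by the root logger; never reaches the sink")
log.info("emitted; clears both the logger floor and the sink floor")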

Shut down cleanly

Register an atexit or signal handler that flushes and closes the logger:

import atexit

from dagstack.logger import Logger


@atexit.register
def shutdown_logger():
    Logger.get("").flush(timeout=5.0)
    Logger.get("").close()

Without graceful shutdown, buffered records (Phase 2 sinks with background workers) may be lost on process exit. Phase 1 sinks (ConsoleSink, FileSink, InMemorySink) write synchronously, so the loss window is small but non-zero.

See also