<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Concepts on Tetragon - eBPF-based Security Observability and Runtime Enforcement</title>
    <link>/docs/concepts/</link>
    <description>Recent content in Concepts on Tetragon - eBPF-based Security Observability and Runtime Enforcement</description>
    <generator>Hugo</generator>
    <language>en</language>
    <atom:link href="/docs/concepts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Events</title>
      <link>/docs/concepts/events/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/docs/concepts/events/</guid>
      <description>Tetragon&amp;rsquo;s events are exposed to the system through either the gRPC endpoint or JSON logs. Commands in this section assume the Getting Started guide was used, but are general, apart from the chosen namespaces, and should work in most environments.&#xA;JSON The first way is to observe the raw JSON output from the stdout container log:&#xA;kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f The raw JSON events provide Kubernetes API and identity metadata, plus OS-level process visibility into the executed binary, its parent, and the execution time.</description>
    </item>
    <item>
      <title>Runtime Hooks</title>
      <link>/docs/concepts/runtime-hooks/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/docs/concepts/runtime-hooks/</guid>
      <description>Applying Kubernetes Identity Aware Policies requires information about Kubernetes (K8s) pods (e.g., namespaces and labels). Based on this information, the Tetragon agent can update its state so that Kubernetes Identity filtering can be applied in-kernel via BPF.&#xA;One way this information becomes available to the Tetragon agent is via the K8s API server. Relying on the API server, however, can introduce a delay between the container starting and the policy being applied.</description>
    </item>
    <item>
      <title>Event throttling</title>
      <link>/docs/concepts/cgroup-rate/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/docs/concepts/cgroup-rate/</guid>
      <description>This page shows you how to configure per-cgroup rate monitoring.&#xA;Concept The idea is that Tetragon monitors the event rate per cgroup and throttles it (stops posting its events) if it crosses the configured threshold.&#xA;The throttled cgroup continues to be monitored, and once its rate is stable below the limit again, the throttling stops and Tetragon resumes receiving the cgroup&amp;rsquo;s events.&#xA;The throttle action generates the following events:&#xA;A THROTTLE start event is sent when the cgroup rate limit is crossed A THROTTLE stop event is sent when the cgroup rate stays below the limit for 5 seconds Note The threshold for a given cgroup is monitored per CPU.</description>
    </item>
  </channel>
</rss>
