Category: Linux
Created: 03.03.2020
This document describes how to tune the settings of very busy network servers (web, DNS, …) with plenty of RAM and bandwidth.
The defaults of a "typical" Linux server are usually very good. Change these settings only if you really have to, and only if you know exactly what you are doing.
The following settings are based on the Kernel sysctl configuration file for Linux by Michiel Klaver.
Create/edit the file /etc/sysctl.d/network-tuning.conf
(or download it):
# Use BBR TCP congestion control and set tcp_notsent_lowat to 16384 to ensure HTTP/2 prioritization works optimally
# Do a 'modprobe tcp_bbr' first (kernel >= 4.9)
# Fall-back to htcp if bbr is unavailable (older kernels)
# Default: tcp_congestion_control=cubic, tcp_notsent_lowat=-1
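# Note: both lines below are intentional. sysctl applies settings in order,
# so if the bbr line fails (module not loaded), the htcp value from the
# first line remains in effect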
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_notsent_lowat = 16384
# For servers with TCP-heavy workloads, enable the 'fq' queue management scheduler (kernel >= 3.12)
# Default: pfifo_fast
net.core.default_qdisc = fq
# Increase the read-buffer space allocatable
# Default: tcp_rmem=4096 87380 6291456, udp_rmem_min=4096, rmem_default=212992, rmem_max=212992
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
# Increase the write-buffer-space allocatable
# Default: tcp_wmem=4096 16384 4194304, udp_wmem_min=4096, wmem_default=212992, wmem_max=212992
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
# Increase number of incoming connections
# Default: 128
net.core.somaxconn = 32768
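# Note: somaxconn is only an upper bound; the application must also request
# a sufficiently large backlog in its listen() call for this to take effect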
# Increase number of incoming connections backlog
# Default: 1000
net.core.netdev_max_backlog = 16384
# Increase the maximum amount of option memory buffers
# Default: 20480
net.core.optmem_max = 65535
# Increase the TCP time-wait buckets pool size to prevent simple DoS attacks
# Default: 131072
net.ipv4.tcp_max_tw_buckets = 1440000
# Try to reuse time-wait connections, but don't recycle them (recycling can break clients behind NAT)
# Default: tcp_tw_recycle=0, tcp_tw_reuse=0
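# Note: the tcp_tw_recycle sysctl was removed in kernel 4.12; on newer
# kernels the line below is rejected with a warning and can be dropped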
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
# Limit the number of orphans; each orphan can eat up to 16M (max wmem) of unswappable memory
# Default: 131072
net.ipv4.tcp_max_orphans = 16384
# Limit the maximum memory used to reassemble IP fragments (CVE-2018-5391)
# Default: ipfrag_low_thresh=3145728, ipfrag_high_thresh=4194304
net.ipv4.ipfrag_low_thresh = 196608
net.ipv6.ip6frag_low_thresh = 196608
net.ipv4.ipfrag_high_thresh = 262144
net.ipv6.ip6frag_high_thresh = 262144
# Don't cache ssthresh from previous connection
# Default: 0
net.ipv4.tcp_no_metrics_save = 1
# Avoid falling back to slow start after a connection goes idle
# keeps our cwnd large with keep-alive connections (kernel > 3.6)
# Default: 1
net.ipv4.tcp_slow_start_after_idle = 0
# Decrease the default time for tcp_fin_timeout
# (time an orphaned connection spends in FIN-WAIT-2; TIME_WAIT is not affected)
# Default: 60
net.ipv4.tcp_fin_timeout = 15
# Decrease the default timing values for TCP keepalive
# Default: tcp_keepalive_time=7200, tcp_keepalive_probes=9, tcp_keepalive_intvl=75
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
# This will ensure that subsequent connections immediately use the new values
net.ipv4.route.flush = 1
net.ipv6.route.flush = 1
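Since the bbr congestion control requires the tcp_bbr kernel module, it should be loaded at boot before the sysctl files are applied. A minimal sketch using systemd's modules-load.d mechanism (the file name is an arbitrary choice):

# /etc/modules-load.d/tcp_bbr.conf
tcp_bbr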
To apply the new settings, run the following command:
sysctl --system
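To verify that the new values are active, you can query them afterwards. A few illustrative checks (eth0 is an example interface name):

sysctl net.ipv4.tcp_congestion_control   # expected: bbr (htcp on older kernels)
sysctl net.core.default_qdisc            # expected: fq
tc qdisc show dev eth0                   # the qdisc actually attached to the interface

Note that default_qdisc only applies to qdiscs attached after the change, so existing interfaces may keep pfifo_fast until they are reconfigured or the machine is rebooted.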