loopback through leaf / single nic iperf test
When bringing up new servers, especially with new or untested network interface cards (NICs) or cables, you might want to test their raw network throughput. While standard iperf tests between two different machines are common, sometimes you want to isolate the test to a single NIC, pushing traffic through it and back, effectively looping it at a nearby switch. This verifies the physical path, the SFP, and the NIC's transmit/receive capabilities without involving another server's NIC or network stack.
The challenge, however, is that an iperf server and client running on the same host will typically use the internal loopback interface (lo) for communication, never touching the physical NIC. We need to trick the Linux network stack into routing traffic out of the specific NIC and then back in to reach the iperf server on the same machine. This is where ip rule and policy-based routing come in.
The Scenario: Looping iperf on enp1s0f0np0
Imagine you have a server with a NIC named enp1s0f0np0 that you want to test. It's connected to a layer-3 switch. We'll set up a dedicated, isolated IP address on this NIC and use ip rule to ensure traffic to that address always leaves and re-enters via enp1s0f0np0.
The Initial State: Default Routing Policy
Before we begin, a quick look at the default IP routing policy on a typical Linux host:
# ip rule show
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
This shows the standard lookup order: first, the local table (for local addresses), then main (your primary routing table), and finally default (a fallback). Our goal is to insert rules before local that force our specific iperf traffic to use a custom route.
Step 1: Configuring the Neighbour Switch
This may or may not be needed, depending on your setup. If you already have a valid address configured on your host, you might get away with using that. You will need to replace the IP addresses in the example configurations below.
On a Cumulus switch, configuring a link-local test network with two IPs might look like:
# net add interface swp1 ip address 169.254.101.0/31
# net comm
Here, swp1 is the switch port connected to enp1s0f0np0. I chose the 169.254.0.0/16 link-local subnet because these addresses are made for exactly this purpose.
We're handing out 169.254.101.1 to the host and 169.254.101.0 to the switch. You're free to choose anything you like. You can even just re-use any currently configured IP. Make sure you replace the host and gateway IPs in the commands below though.
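If the device on the far end isn't a Cumulus switch but, say, another Linux box doing the layer-3 work, the neighbour-side setup is simply the mirror of the host side. A rough sketch, where eth0 is a placeholder for the peer's port facing the server under test:

```shell
# On the peer box; eth0 stands in for the port facing the test host.
ip addr add 169.254.101.0/31 dev eth0

# The peer must route the looped packet back out the same port,
# so IP forwarding needs to be on.
sysctl -w net.ipv4.ip_forward=1
```

The destination 169.254.101.1 sits on the directly connected /31, so the peer forwards the packet straight back out the interface it arrived on (it may also emit ICMP redirects, which are harmless here).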
Step 2: Configuring the Host NIC and Routing Policy
Now for the magic on the host. We'll add the corresponding IP address to our NIC and then, crucially, define a new routing table and rules to direct traffic through it.
Add the host IP to the NIC:
# ip addr add 169.254.101.1/31 dev enp1s0f0np0
Create a custom routing table (table 169) for our looped traffic. This says: to reach 169.254.101.1, go via 169.254.101.0 (the switch's IP) out of enp1s0f0np0.
# ip route add 169.254.101.1 via 169.254.101.0 \
      dev enp1s0f0np0 table 169
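Numeric table IDs work fine, but if you'd rather see a name in ip output, you can optionally register one in /etc/iproute2/rt_tables. The name "nicloop" below is an arbitrary example; this step is purely cosmetic:

```shell
# Optional: give table 169 a symbolic name.
echo "169 nicloop" >> /etc/iproute2/rt_tables
# Afterwards "table nicloop" and "table 169" are interchangeable
# in ip route and ip rule commands.
```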
Policy-based routing rule 1: Prioritize traffic from our test IP entering enp1s0f0np0 to be handled by the local table. This is for the return traffic.
# ip rule add prio 169 from 169.254.101.1 \
iif enp1s0f0np0 lookup local
Policy-based routing rule 2: Prioritize traffic to our test IP to use our custom table 169. This forces outgoing traffic destined for ourselves to go via the switch.
# ip rule add prio 170 to 169.254.101.1 lookup 169
Policy-based routing rule 3: This is a critical adjustment. The default from all lookup local rule (prio 0) would catch our 169.254.101.1 address and loop it internally. We need to make sure our custom rules come before it.
The trick here is to insert a duplicate lookup local rule with a higher priority number (171, so it is evaluated after our custom rules) and then delete the old prio 0 rule. This effectively re-prioritizes the local lookup:
# ip rule add prio 171 from all lookup local
# ip rule del prio 0 from all lookup local
The Result: A Tailored Routing Policy
After these changes, your routing policy (ip rule show) and tables (ip route show, ip route show table 169) should look something like this:
# ip rule show
169: from 169.254.101.1 iif enp1s0f0np0 lookup local
170: from all to 169.254.101.1 lookup 169
171: from all lookup local
32766: from all lookup main
32767: from all lookup default
# ip route show
default via 10.20.30.1 dev enp1s0f0np0
10.20.30.0/24 dev enp1s0f0np0 proto kernel scope link src 10.20.30.44
169.254.101.0/31 dev enp1s0f0np0 proto kernel scope link src 169.254.101.1
# ip route show table 169
169.254.101.1 via 169.254.101.0 dev enp1s0f0np0
Notice how ip rule now ensures that any traffic to 169.254.101.1 goes through lookup 169, forcing it out enp1s0f0np0 via 169.254.101.0 (the switch). When it comes back, the ip rule at prio 169 ensures traffic from 169.254.101.1 on interface enp1s0f0np0 is handled locally. This is key to bypassing the standard loopback.
The 10.20.30.44 shown above is just a sample IP that may or may not be configured on your host.
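If you want direct evidence that packets are hitting the wire rather than the internal loopback, a capture on the physical interface during a ping (or the iperf run later) should show them:

```shell
# Watch the test traffic on the physical NIC; -n skips DNS lookups.
tcpdump -ni enp1s0f0np0 host 169.254.101.1
```

A capture on lo over the same window should stay silent for this address.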
A quick test, comparing the link-local IP to localhost:
# ping -I 169.254.101.1 169.254.101.1
PING 169.254.101.1 (169.254.101.1) from 169.254.101.1 : 56(84) bytes of data.
64 bytes from 169.254.101.1: icmp_seq=1 ttl=63 time=0.578 ms
64 bytes from 169.254.101.1: icmp_seq=2 ttl=63 time=0.538 ms
^C
# ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.025 ms
^C
The successful ping -I 169.254.101.1 169.254.101.1 with measurable latency and a TTL of 63 (decremented from 64 by the switch's routing hop) confirms the traffic is indeed leaving and re-entering the enp1s0f0np0 interface. Compare the near-zero latency and TTL of 64 on the pure-loopback ping to localhost.
Running iperf
With the routing in place, running iperf is straightforward.
On the server, start iperf binding to our test IP:
# iperf -B 169.254.101.1 -s
On the client (the same machine!), connect to the server using the test IP:
# iperf -B 169.254.101.1 -c 169.254.101.1 -t 15 -i 5
The -B flag ensures iperf binds to our chosen IP address for its traffic, which our ip rule configuration will then intercept and route correctly. You can now observe the throughput directly through your target NIC.
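Besides iperf's own numbers, the kernel's per-interface counters offer a sanity check that the bytes really crossed the NIC; run this before and after a test and compare the TX/RX byte counts:

```shell
# Per-interface statistics from iproute2; both TX and RX counters
# should grow by roughly the amount of data iperf pushed.
ip -s link show dev enp1s0f0np0
```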
This method provides a robust way to isolate and thoroughly test a single NIC's performance by forcing traffic through the physical layer, giving you confidence in your new hardware or setup.
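When you're done testing, everything above can be reverted by undoing each step. One sketch of the teardown, restoring the prio 0 local rule before removing our prio 171 copy so a local lookup is in place at all times:

```shell
# Restore the default local rule first, then remove our copies.
ip rule add prio 0 from all lookup local
ip rule del prio 171 from all lookup local
ip rule del prio 170 to 169.254.101.1 lookup 169
ip rule del prio 169 from 169.254.101.1 iif enp1s0f0np0 lookup local

# Drop the custom table and the test address.
ip route flush table 169
ip addr del 169.254.101.1/31 dev enp1s0f0np0
```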