__**INFINIBAND**__:
\\
{{ :
\\
From [[https://
\\
The key here is to understand that InfiniBand is designed so that servers and storage talk directly, roughly speaking, from memory to memory. They bypass the classical network stack, which allows much higher rates, comparable to an internal memory bus. They do this via Remote Direct Memory Access (**RDMA**).
\\
We can run routing protocols over InfiniBand but, in our case, the setup is very simple: the Mellanox InfiniBand switches create a high-performance fabric between the cluster of servers and the DDN storage (controllers, in InfiniBand jargon).
| - | |||
| - | Terms: | ||
| - | * RDMA provides access to the memory from one computer to the memory of another computer without involving either computer’s operating system. This technology enables high-throughput and low-latency networking with low CPU utilization. | ||
| - | * Mellanox provides RDMA via the OFED package | ||
| - | * **LID** : local indentifier (All devices in a subnet have a Local Identifier (LID)). Routing between different subnets is done on the basis of a **Global Identifier (GID)** | ||
| - | * GID: Is another identifier but is to route BETWEEN SUBNETS. Contains : Subnet Prefix and a GUID (Global Unique Identifier). | ||
| - | * NSD (Network Shared Disks): In our context, NSD is the server that connects to the storage via the Mellanox switch. The servers share the NSD's to the clients, creating some sort of distributed logical disk (a bit like the hyperflex technology). Particuartly in our setupm the servers dont share their local disks but they expose the DDN's disks. | ||
| - | * SM (Subnet Manager): | ||
| - | * SM master is the node truly acting as SM. The node with the highest priority [0-15] wins. | ||
| - | * In our setup, servers all have priority 14 while switch has priority 15. | ||
| - | * MAD: Infiniband management datagrams. They use RMPP (Reliable Multi Packet Protocol): | ||
| - | * SRP : Discovers and connects to InfiniBand SCSI RDMA Protocol (SRP) targets in an IB fabric. | ||
| - | * sysimgguid : system identifies | ||
| - | * caguid : nic (hca) identifier | ||
| - | |||
| - | |||
| - | ---- | ||
| - | Useful MLNX-OS commands: | ||
| - | > sh interfaces | i state | ||
| - | ! Below is under privilege exec mode | ||
| - | show run | ||
| - | show interfaces ib status | ||
| - | show guids ! To see the switch group identifier (like the switch main mac address) | ||
| - | fae sminfo | ||
| - | fae ibnetdiscover | ||
| - | | ||
| - | Useful server side ib commands | ||
| - | ibstatus | ||
| - | ibping -S -L 10 ---- ibping -L 20 -c 10 -n 3 # to ping, we need to run one in the server (LID 20) and one in the client (LID 10). This is because even ping makes RDMA calls | ||
| - | |||
| - | |||
| - | ---- | ||
| - | **SUBNET MANAGER (opensm)**\\ | ||
| - | Infiniband subnet manager works in two planes: | ||
| - | * SM-config: For config sync. It happens over mgmt network and relates to configuration and user management | ||
| - | * smnode-OpenSM : Cluster master. SM-master. opensm is a software entity | ||
| - | * sm keeps forwarding state; handout link identifiers (lid, l2 identifier); | ||
| - | |||
| - | Tshoot commands: | ||
| - | show ib smnodes | ||
| - | show ib smnode nyzsfsll51 sm-state | ||
| - | show guids # so we can identify the macs | ||
| - | fae sminfo | ||
| - | fae ibnetdiscover | ||
| - | show ib smnode | ||
| - | |||
| - | |||

If we look at the command prompts of the two switches we see:
  server1 [serversmname:
  server2 [serversmname:
^^ the master/
| - | ---- | ||
| - | |||
| - | |||
| - | Initial setup:\\ | ||
| - | https:// | ||
| - | |||
| - | |||
| - | ---- | ||
| - | |||
| - | MELLANOX UPGRADE | ||
| - | First scp the image to any of the linux servers. Preferably in the same region where the switch is. | ||
| - | \\ | ||
| - | Then do the following om the switch (this is an example): | ||
| - | conf t | ||
| - | image delete XXX // --> delete old images, if exist | ||
| - | image fetch scp:// | ||
| - | image install image-X86_64-3.6.6162.img | ||
| - | image boot next | ||
| - | configuration write | ||
| - | reload | ||
| - | |||
| - | The upgrade itself (after the reload) takes 3-4 minutes. | ||
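Before fetching an image, it can be worth sanity-checking that the new file really is newer than the running release. A minimal sketch, assuming the filenames follow the image-X86_64-<version>.img pattern shown in the example above:

```python
# Sketch: parse the version out of an MLNX-OS image filename of the form
# "image-X86_64-3.6.6162.img" and compare releases numerically.
import re

def image_version(filename: str) -> tuple:
    """Return the version as a tuple of ints, e.g. (3, 6, 6162)."""
    m = re.search(r"image-X86_64-([\d.]+)\.img", filename)
    if not m:
        raise ValueError(f"unrecognized image name: {filename}")
    return tuple(int(part) for part in m.group(1).split("."))

# Tuple comparison handles multi-digit components correctly
# (e.g. 3.6.8010 is newer than 3.6.6162).
print(image_version("image-X86_64-3.6.8010.img") > image_version("image-X86_64-3.6.6162.img"))
# → True
```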
| - | |||
| - | |||
| - | |||
| - | To downgrade [[https:// | ||
| - | |||
| - | |||
| - | ---- | ||
| - | ===== Simple InfiniBand Troubleshooting Case (1) ===== | ||
| - | |||
| - | **Symptom: | ||
| - | MPI (Message Passing Interface) jobs between two GPU nodes are limited to ~12 Gb/s, while other nodes on the HDR100 fabric reach 100 Gb/s. | ||
| - | |||
**Checks:**
  * `ibstat` on the slow node shows:

    Port 1: State: Active
    Physical state: LinkUp
    Rate: 25 Gb/sec
    Width: 1X

  → The link has fallen back to 1 lane × 25 Gb/s instead of 4 lanes × 25 Gb/s.

  * `iblinkinfo -r` confirms only **lane 1** is active and reports symbol errors on that port.
  * `perfquery -x` shows increasing **SymbolErrorCounter** and **LinkErrorRecoveryCounter** on the affected switch port.
| - | |||
| - | **Fix: | ||
| - | Replaced the QSFP56 cable. `ibstat` now reports **Rate 100 Gb/s, Width 4X**. Bandwidth test (`ib_write_bw`) confirms ~97 Gb/s performance. | ||
| - | |||
| - | **Root Cause: | ||
| - | One faulty cable lane caused fallback to 1X. Basic InfiniBand tools helped quickly identify and resolve the issue. | ||
| - | |||
| - | |||
===== Simple InfiniBand Troubleshooting Case (2): Subnet Manager Misconfiguration =====

**Symptom:**
Some nodes fail to establish MPI communication or exhibit long startup times. `ibstat` shows ports stuck, never reaching "Active".
**Checks:**
  * On affected nodes, `ibstat` shows:

    Port 1: State: Down
    Physical state: Polling
    Rate: 100 Gb/sec (Expected)

  → The port sees light but is not becoming Active.

  * `ibv_devinfo` shows the device present but no active port.
  * No errors found in hardware or cabling.
  * On a working node, `sminfo` reports a different Subnet Manager than the one expected:

    SM lid 1, lmc 0, smsl 0, priority 5, state: master
    smguid 0x...

**Fix:**
Two Subnet Managers (SMs) were active with equal priority, causing instability. We disabled `opensm` on the unintended node; after restarting `opensm` on the correct node, all ports transitioned to "Active".

**Root Cause:**
Multiple Subnet Managers with conflicting roles caused port initialisation to stall or flap. Ensuring a single master SM with the correct priority resolved the issue.
| - | |||