NVIDIA Quantum MQM8790-HS2R 200G InfiniBand Switch Unmanaged, 40-Port 16Tb/s, C2P Airflow UFM Ready

Product Details:

Brand Name: Mellanox
Model Number: MQM8790-HS2R (920-9B110-00RH-0D0)
Document: MQM8700 series.pdf

Payment & Shipping Terms:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Based on stock availability
Payment Terms: T/T
Supply Ability: Delivery by project/batch
Contact us for the best price

Detailed Information

Model No.: MQM8790-HS2R
Transmission Rate: 200Gb/s per port
Ports: 40 (QSFP56)
Technology: InfiniBand
Transport Package: Outer carton
Trademark: Mellanox

Highlights: NVIDIA Quantum InfiniBand switch 200G, 40-port unmanaged InfiniBand switch, 16Tb/s Mellanox network switch
Product Description

NVIDIA Quantum MQM8790-HS2R 200G InfiniBand Switch | Unmanaged, 40-Port 16Tb/s, C2P Airflow | UFM Ready

High‑performance fixed‑configuration unmanaged switch delivering 40 ports of 200Gb/s (or 80 ports of 100Gb/s) with 16 Tb/s non‑blocking throughput, in‑network computing acceleration, and ultra‑low latency — purpose‑built for HPC, AI clusters, and hyperscale data centers. Designed for external management via NVIDIA UFM™ platform with C2P airflow configuration.

Model: MQM8790-HS2R | C2P Airflow | AC PSU | Unmanaged 1U Rackmount | UFM Ready
Total Bandwidth: 16 Tb/s non‑blocking
Port Speed: 200Gb/s per port / 100Gb/s splitting
Port Density: 40x QSFP56 or 80x 100Gb/s
Latency: Sub‑130ns cut‑through switching
Management: External (UFM), Unmanaged SKU
Product Overview

The NVIDIA Quantum MQM8790‑HS2R is an unmanaged 200G InfiniBand smart switch designed for large‑scale data center deployments where centralized fabric management is preferred. Part of the NVIDIA Quantum QM8700 series, this switch delivers forty 200Gb/s ports in a compact 1U form factor with 16 Tb/s aggregate non‑blocking throughput and sub‑130ns cut‑through latency. Unlike its managed counterparts, the MQM8790‑HS2R is optimized for external management via NVIDIA Unified Fabric Manager (UFM™), enabling data center operators to efficiently provision, monitor, troubleshoot, and maintain modern data center fabrics at scale.

Each 200Gb/s QSFP56 port can be split into two independent 100Gb/s ports, providing up to 80 ports of 100Gb/s connectivity — ideal for double‑density top‑of‑rack deployments. The MQM8790‑HS2R features C2P (connector‑to‑power) airflow, dual redundant power supplies, and full support for NVIDIA SHARP™ in‑network computing acceleration.

Key Features
  • 200Gb/s InfiniBand per port – forty QSFP56 ports supporting 200G or 100G split modes, non‑blocking architecture.
  • Unmanaged SKU for External Control – No on‑board Subnet Manager; designed for centralized management via NVIDIA UFM™ platform.
  • In‑Network Computing Acceleration – NVIDIA SHARP™ technology enables in‑switch data aggregation, reducing MPI, NCCL, and SHMEM communication time by orders of magnitude.
  • High Radix & Split Capability – Convert 40x 200G ports into 80x 100G ports for double‑density topologies without extra switches.
  • Advanced Congestion Management – Adaptive routing, static routing, and quality of service (QoS) to eliminate hot spots and maximize effective fabric bandwidth.
  • Redundant & Hot‑Swappable PSU – 1+1 redundant power, 80 Plus Gold certified, ENERGY STAR compliant, with power optimization on partial port usage.
  • C2P Airflow Configuration – Connector‑to‑power airflow (intake at the port side, exhaust at the power‑supply side) for data center cooling schemes where cold air is supplied at the port side.
  • UFM Ready – Seamless integration with NVIDIA Unified Fabric Manager for advanced telemetry, predictive analytics, and automated fabric orchestration.
  • Backward Compatible – Seamless interoperability with previous InfiniBand generations (EDR, FDR).
Technology: In‑Network Computing & SHARP™

NVIDIA Quantum switches embed Scalable Hierarchical Aggregation and Reduction Protocol (SHARP™) engines directly in the silicon. Data traversing the switch can be processed — aggregated, reduced, or broadcast — without multiple round‑trips to server endpoints. This dramatically accelerates collective operations such as all‑reduce, barrier, and broadcast, which are critical for deep learning frameworks (TensorFlow, PyTorch via NCCL) and MPI‑based HPC simulations. The result is up to 10x performance gains for communication‑intensive workloads and reduced CPU overhead, freeing compute resources for application processing.
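To make the collective concrete, here is a minimal pure‑Python sketch of what an all‑reduce computes: every rank ends with the element‑wise sum of all ranks' buffers. The function name is illustrative and this simulates only the semantics; SHARP's value lies in performing the reduction inside the switch rather than on the hosts.

```python
# Illustrative sketch (not NVIDIA's implementation): the semantics of an
# all-reduce collective. Each rank contributes a buffer; after the
# collective, every rank holds the element-wise sum. SHARP performs this
# reduction in the switch ASIC instead of bouncing data between servers.

def all_reduce_sum(rank_buffers):
    """Element-wise sum across ranks; every rank receives the full result."""
    n = len(rank_buffers[0])
    total = [sum(buf[i] for buf in rank_buffers) for i in range(n)]
    return [list(total) for _ in rank_buffers]  # one result copy per rank

if __name__ == "__main__":
    buffers = [[1, 2], [10, 20], [100, 200]]  # e.g. gradients on 3 ranks
    result = all_reduce_sum(buffers)
    print(result[0])  # every rank now holds [111, 222]
```

In a real cluster this operation is issued through MPI or NCCL; the switch‑resident reduction is what removes the host‑to‑host round trips described above.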

The MQM8790‑HS2R also supports adaptive routing and congestion control algorithms that automatically balance traffic across multiple paths, delivering near‑line‑rate throughput even under high contention.

Typical Deployments
  • Large‑Scale AI & ML Clusters – GPU‑based systems requiring centralized fabric management and telemetry across hundreds or thousands of nodes.
  • High‑Performance Computing (HPC) – Research labs, national labs, and universities where external fabric managers provide enhanced monitoring and automation.
  • Hyperscale Data Centers – Fat‑tree, DragonFly+, and multi‑dimensional torus topologies managed through UFM for maximum efficiency.
  • Enterprise & Cloud Service Providers – Environments requiring unified control plane across multiple switch fabrics.
  • Top‑of‑Rack (ToR) with Centralized Management – Double‑density 100Gb/s per server connectivity with fabric‑wide visibility, utilizing C2P airflow for rear‑to‑front cooling designs.
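The topologies listed above can be sized with back‑of‑envelope math. As an assumption‑laden sketch (not an NVIDIA design guide), a standard non‑blocking two‑level fat tree built from radix‑k switches supports k²/2 hosts using k leaf switches and k/2 spines, which is where the 40‑port native and 80‑port split modes differ sharply:

```python
# Back-of-envelope sizing for a non-blocking two-level fat tree built from
# fixed radix-k switches (k = 40 at 200G native, k = 80 in 100G split mode).
# Standard result: k leaf switches with k/2 hosts each, plus k/2 spines.

def two_level_fat_tree(radix):
    """Return maximum host count and switch counts for a 2-level fat tree."""
    return {
        "hosts": radix * radix // 2,   # k leaves x k/2 host ports each
        "leaf_switches": radix,        # each spine links once to every leaf
        "spine_switches": radix // 2,  # half of each leaf's ports go up
    }

if __name__ == "__main__":
    print(two_level_fat_tree(40))  # 200G native: 800 hosts
    print(two_level_fat_tree(80))  # 100G split mode: 3200 hosts
```

The split mode quadruples the host count of a two‑level fabric at 100G per host, which is the "lower switch count" advantage claimed later in this page.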
Compatibility

The MQM8790‑HS2R works seamlessly with NVIDIA ConnectX‑6, ConnectX‑7, and BlueField DPU adapters, supporting both InfiniBand and mixed fabrics. It is backward compatible with previous InfiniBand speeds (EDR 100Gb/s, FDR 56Gb/s). Fully interoperable with existing NVIDIA Quantum fabric switches. For management, the switch is designed to be controlled via NVIDIA Unified Fabric Manager (UFM) platform, which provides comprehensive fabric provisioning, monitoring, and predictive troubleshooting. Operating system support includes major Linux distributions (RHEL, Ubuntu, Rocky Linux) and NVIDIA certified GPU servers.

Note on Management: MQM8790-HS2R is an unmanaged switch variant. It does not include an on‑board Subnet Manager or MLNX‑OS CLI/WebUI. Fabric management must be handled externally via NVIDIA UFM or third‑party InfiniBand Subnet Managers.
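Since fabric bring‑up requires an external Subnet Manager, a minimal ops sketch for starting OpenSM on a Linux fabric host follows. It assumes NVIDIA OFED (or the distro's opensm and infiniband-diags packages) is installed; the GUID shown is a hypothetical placeholder.

```shell
# Find the GUID of the local HCA port that reaches the fabric
ibstat | grep -i 'Port GUID'

# Start OpenSM as a daemon, bound to that port GUID
# (0x0002c90300a1b2c3 is a hypothetical placeholder; substitute your own)
opensm -B -g 0x0002c90300a1b2c3

# Verify that a Subnet Manager is now active on the fabric
sminfo
```

For production fabrics this page recommends NVIDIA UFM instead, which layers telemetry, analytics, and orchestration on top of the same Subnet Manager role.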
Specifications

Model Number: MQM8790-HS2R
Ports & Speed: 40 QSFP56 ports; up to 200Gb/s per port; supports split into 80 ports of 100Gb/s
Aggregate Throughput: 16 Tb/s non‑blocking
Switching Latency: < 130ns (cut‑through)
Management: Unmanaged — external management via NVIDIA UFM™; on‑board x86 dual‑core CPU (Broadwell ComEx D‑1508 2.2GHz), 8GB system memory
Power Supply: 1+1 redundant hot‑swappable, 100‑127VAC / 200‑240VAC, 80 Plus Gold, ENERGY STAR
Airflow: C2P (connector‑to‑power), standard depth
Dimensions (HxWxD): 1.7 x 17 x 23.2 in (43.6 x 433.2 x 590.6 mm), 1U
Weight: 12.48 kg / 27.5 lbs with 2 PSUs
Operating Temperature: 0°C to 40°C
Certifications: CE, FCC, VCCI, ICES, RCM, RoHS compliant
Warranty: 1‑year limited hardware warranty (extension options available)
Selection Guide (Orderable Part Numbers)

MQM8790-HS2R: NVIDIA Quantum 200Gb/s InfiniBand switch, 40 QSFP56, dual AC PSU, x86 dual core, standard depth, rail kit; C2P (connector‑to‑power) airflow; Unmanaged (UFM ready)
MQM8790-HS2F: Same as above but with P2C (power‑to‑connector) airflow; Unmanaged (UFM ready)
MQM8700-HS2F: Managed variant with on‑board Subnet Manager; P2C airflow; Managed (MLNX‑OS)
MQM8700-HS2R: Managed variant with on‑board Subnet Manager; C2P airflow; Managed (MLNX‑OS)

For environments requiring centralized fabric management across hundreds or thousands of switches, the MQM8790‑HS2R offers a cost‑effective, unmanaged building block optimized for NVIDIA UFM orchestration, with C2P (connector‑to‑power) airflow for port‑side air intake.

Advantages Over Traditional Switching
  • Centralized Fabric Management – Leverage NVIDIA UFM for unified visibility, automation, and predictive analytics across the entire fabric.
  • Superior ROI – Reduce capital expenditure with double‑density 100G port capacity and lower switch count for large fabrics.
  • Energy Efficient – Dynamic power scaling based on port utilization, lowering operational costs.
  • SHARP™ Acceleration – Up to 10x faster collective communications without consuming host CPU cycles.
  • Scalable Topologies – Native support for Fat Tree, DragonFly+, and Torus to future‑proof data center growth.
  • Flexible Airflow Options – C2P (connector‑to‑power) configuration suits racks where cold air is supplied at the port side; the P2C variant (MQM8790-HS2F) covers the reverse direction.
  • Proven Ecosystem – Backed by NVIDIA's comprehensive software stack and 24/7 partner support.
Service & Support

Starsurge Group provides end‑to‑end lifecycle services for NVIDIA Quantum switches, including pre‑sales architecture consulting, proof‑of‑concept testing, and global logistics. Our experienced technical team offers remote troubleshooting, firmware upgrades, and RMA coordination. For UFM deployments, we offer professional services for platform configuration and integration. Warranty extension options and 24x7 priority support available upon request. Multilingual support for EMEA, Americas, and APAC regions ensures rapid response for mission‑critical deployments.

Frequently Asked Questions
Q: What is the difference between MQM8790-HS2R and MQM8700-HS2R?
A: MQM8790-HS2R is an unmanaged switch designed for external management via NVIDIA UFM. MQM8700-HS2R includes an on‑board Subnet Manager and full MLNX‑OS for standalone management. Both feature C2P airflow.
Q: Can I use MQM8790-HS2R without UFM?
A: Yes, it can be managed by any standards‑compliant InfiniBand Subnet Manager, but NVIDIA UFM is recommended for advanced features like telemetry, predictive analytics, and automated orchestration.
Q: What is C2P airflow and when should I use it?
A: C2P (connector‑to‑power) airflow draws air in at the port side and exhausts it at the power‑supply side. Use it when the switch ports face the cold aisle of the rack.
Q: Can I use this switch with 100Gb/s ConnectX‑6 adapters?
A: Yes, each 200Gb/s port can operate at 100Gb/s using splitter cables or QSFP56 to dual QSFP56 breakout, supporting up to 80x 100Gb/s links.
Q: Does it support SHARP™ in‑network computing?
A: Yes, the MQM8790-HS2R fully supports NVIDIA SHARP™ technology for collective communication acceleration, independent of management mode.
Precautions & Installation Notes
  • Ensure ambient operating temperature remains between 0°C and 40°C; maintain proper rack ventilation.
  • Use only qualified QSFP56 optics or DAC cables listed in NVIDIA compatibility guide.
  • Airflow direction: MQM8790-HS2R uses C2P (connector‑to‑power) — confirm your rack cooling scheme supplies cold air at the port side.
  • Power supply must be connected to appropriate AC voltage (100‑240VAC) with grounding.
  • External Subnet Manager (UFM or other) must be deployed for fabric initialization and management.
  • Weight ~12.5kg with two PSUs — use proper mechanical lift for rack mounting.
About Starsurge Group

Hong Kong Starsurge Group Co., Limited is a technology‑driven provider of network hardware, IT services, and system integration solutions. Founded in 2008, the company serves customers worldwide with products including network switches, NICs, wireless access points, controllers, cabling, and infrastructure equipment. Backed by an experienced sales and technical team, Starsurge supports industries such as government, healthcare, manufacturing, education, finance, and enterprise.

With a customer‑first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions. As an authorized partner for leading networking brands, we deliver global logistics, custom software development, and multilingual support — helping clients build efficient, scalable, and dependable network infrastructure.

Key Facts At a Glance

SHARP™ Technology: Embedded (collective offload)
Port Splitting: 40 → 80x 100G (double radix)
Management: UFM / External SM (unmanaged SKU)
Airflow: C2P (connector‑to‑power)
Compatibility Matrix

Adapters: NVIDIA ConnectX‑6, ConnectX‑7, BlueField‑2 / BlueField‑3 InfiniBand
Cables & Optics: QSFP56 DAC (passive up to 3m, active up to 5m), AOC, optical transceivers (SR4, LR4)
Operating Systems: Linux (RHEL 8/9, Ubuntu 20.04/22.04, Rocky Linux), Windows Server with InfiniBand stack
Management Platforms: NVIDIA UFM, OpenSM, other standards‑compliant Subnet Managers
Topology Support: Fat Tree, DragonFly+, 2D/3D Torus, SlimFly
Buyer Checklist
  • Confirm airflow direction: MQM8790-HS2R uses C2P (connector‑to‑power) — verify that your rack supplies cold air at the port side.
  • Verify required port speed (200G native or 100G breakout) and cable assembly type.
  • Check power input: dual redundant AC with C13/C14 connectors.
  • Ensure rack depth supports 23.2 inches (standard depth).
  • Plan for external Subnet Manager: UFM license or OpenSM deployment required for fabric operation.
  • Validate UFM software licensing requirements if advanced telemetry features are needed.
Related Products
  • NVIDIA Unified Fabric Manager (UFM) Platform Licenses
  • NVIDIA Quantum QM9700 Series (NDR 400G InfiniBand)
  • NVIDIA ConnectX‑6 VPI Adapter Cards (100Gb/s Dual‑port)
  • NVIDIA BlueField‑3 DPU for infrastructure acceleration
  • Starsurge Custom Rack Integration Kits & QSFP56 Cables (passive/active)