NVIDIA ConnectX-7 MCX755106AS-HEAT 400G InfiniBand Adapter – Single-Port NDR, PCIe 5.0, Hardware-Accelerated Security & Storage for Hyperscale Workloads
Product Details:
| Brand Name: | Mellanox (NVIDIA) |
|---|---|
| Model Number: | MCX755106AS-HEAT (900-9X7AH-0078-DTZ) |
| Document: | ConnectX-7 InfiniBand.pdf |
Payment & Shipping Terms:
| Min. Order Quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Delivery by project/batch |
Detailed Information
| Model No.: | MCX755106AS-HEAT (900-9X7AH-0078-DTZ) | Ports: | Single-port |
|---|---|---|---|
| Technology: | InfiniBand | Interface Type: | QSFP112 |
| Specification: | 16.7 cm x 6.9 cm | Origin: | India / Israel / China |
| Transmission Rate: | 400Gb/s | Host Interface: | PCIe Gen 5.0 x16 |
| Highlight: | NVIDIA ConnectX-7 InfiniBand adapter, 400G InfiniBand network card, PCIe 5.0 Mellanox adapter |
Product Description
High-performance single-port 400Gb/s adapter for InfiniBand NDR and 400GbE networks—featuring PCIe 5.0 x16, inline hardware security (IPsec/TLS/MACsec), NVIDIA In-Network Computing engines, and NVMe-oF offloads for AI, HPC, and enterprise data centers.
- Single QSFP112 port supporting 400Gb/s InfiniBand (NDR) and 400/200/100/50/25/10GbE
- PCIe Gen 5.0 x16 (backward compatible with Gen 4.0/3.0) | Ultra-low latency and 215+ million messages/sec
- Hardware offloads: NVMe-oF target/initiator, XTS-AES 256/512-bit encryption, MPI tag matching
- Inline security engines: IPsec, TLS 1.3, MACsec with AES-GCM 128/256-bit
- PCIe half-height half-length (HHHL) form factor, RoHS compliant, advanced timing (PTP/SyncE)
- 400Gb/s Throughput: Single port operating at up to 400Gb/s InfiniBand (NDR) or 400GbE with full bidirectional bandwidth.
- In-Network Computing: Offloads collective operations (MPI, NCCL, SHMEM) using NVIDIA SHARP technology.
- Inline Security: Hardware encryption/decryption for IPsec, TLS 1.3, and MACsec at line rate; secure boot with root-of-trust.
- NVMe-oF Offloads: Target and initiator offloads for NVMe over Fabrics (including NVMe/TCP), reducing CPU utilization.
- Precision Timing: IEEE 1588v2 PTP with 12ns accuracy, SyncE, and configurable PPS in/out.
The MCX755106AS-HEAT integrates NVIDIA In-Network Computing engines (SHARP), RDMA (IBTA 1.5), RoCE, and NVMe-oF. It supports PCIe Gen 5.0 (x16), PAM4 (100G) and NRZ (10G/25G) SerDes, and advanced features like Dynamically Connected Transport (DCT), On-Demand Paging (ODP), and Adaptive Routing. Overlay offloads for VXLAN, GENEVE, NVGRE are hardware-accelerated. Compliant with IEEE 802.3ck, 802.3bj, and InfiniBand Trade Association specifications.
ConnectX-7 offloads communication, storage, and security tasks from the host CPU to the adapter hardware. For MPI collectives, the adapter processes data in transit using SHARP, reducing endpoint traffic. For storage, NVMe-oF commands are processed directly on the adapter, freeing CPU cores. Inline encryption engines (IPsec/TLS/MACsec) encrypt/decrypt packets at wire speed without CPU involvement. The result is lower latency, higher message rate, and improved application scalability—critical for 400G environments.
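To put the message-rate figure above in perspective, here is a quick back-of-the-envelope calculation (illustrative only, not vendor benchmark data): at 215 million messages per second, the adapter has well under 5 ns of average budget per message, which is why hardware offload rather than per-packet CPU processing is required at this scale.

```python
# Back-of-the-envelope arithmetic for the 215M msg/s message-rate figure.
# Illustrative only; assumes the vendor-quoted aggregate rate.

MSG_RATE = 215e6  # messages per second

# Average time budget per message, in nanoseconds
ns_per_msg = 1e9 / MSG_RATE
print(f"{ns_per_msg:.2f} ns per message")  # ~4.65 ns

# Payload bandwidth if every message carried a 64-byte payload
payload_gbps = MSG_RATE * 64 * 8 / 1e9
print(f"{payload_gbps:.1f} Gb/s of 64-byte payloads")  # ~110 Gb/s
```

Even with small 64-byte payloads, sustaining this rate consumes over 100 Gb/s of payload bandwidth alone, before headers, which modern CPUs cannot keep up with in software at line rate.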
- AI Training Nodes: GPU-to-GPU communication with GPUDirect RDMA and NCCL collectives.
- HPC Compute Nodes: MPI-based simulations requiring ultra-low latency and high message rate.
- NVMe-oF Storage: Target/initiator offload for high-performance NVMe storage access.
- Secure Cloud Servers: Inline IPsec/TLS for multi-tenant security without CPU overhead.
- Financial Trading: Precision PTP timing for high-frequency trading and timestamping.
| Model | Ports & Speed | Host Interface | Form Factor | Security Offloads | Protocols | OPN |
|---|---|---|---|---|---|---|
| ConnectX-7 | 1x QSFP112 (400Gb/s NDR/400GbE) | PCIe 5.0 x16 | PCIe HHHL | IPsec, TLS 1.3, MACsec, AES-XTS | InfiniBand, Ethernet, NVMe-oF | MCX755106AS-HEAT |
| ConnectX-7 | 2x QSFP112 (400Gb/s) | PCIe 5.0 x16 | PCIe HHHL | IPsec/TLS/MACsec | IB/Eth | MCX75310AAS-NEAT |
| ConnectX-7 | 1x QSFP112 (200Gb/s) | PCIe 5.0 x16 | OCP 3.0 | IPsec/TLS/MACsec | IB/Eth | MCX755106AS-HEAT (OCP) |
Note: MCX755106AS-HEAT supports 400Gb/s InfiniBand (NDR) and 400/200/100/50/25/10GbE. Dimensions: 167.65mm x 68.90mm (HHHL). Includes tall and low-profile brackets. Power consumption < 20W typical.
- vs. ConnectX-6: Double the bandwidth (400Gb/s vs. 200Gb/s), PCIe 5.0, inline IPsec/TLS/MACsec, and advanced PTP with 12ns accuracy.
- vs. Competitor NICs: True hardware offload for NVMe-oF, MPI collectives, and full security suite—all at line rate.
- Single-Port Efficiency: Ideal for leaf nodes where dual-port is not required, reducing cost and power.
- Integrated Security: Eliminates need for external encryption appliances; FIPS compliance ready.
We offer 24/7 technical consultation, RMA services, and integration support for ConnectX-7 adapters. Each card is backed by a 1-year warranty (extendable). Our team provides driver validation for major Linux distributions (RHEL, Ubuntu), Windows Server, and VMware. Pre-sales configuration assistance for NDR InfiniBand fabric design is available. All cards ship from our 10M+ USD inventory with same-day dispatch.
Frequently Asked Questions
Q: Is this adapter compatible with NVIDIA Quantum-2 switches?
A: Yes, it is fully interoperable with NVIDIA Quantum-2 QM9700/QM9790 switches using NDR mode at 400Gb/s.
Q: Can the card run both InfiniBand and Ethernet?
A: Yes, it supports both InfiniBand and Ethernet protocols. The firmware auto-detects the switch type and configures the appropriate mode.
Q: Does the adapter support RoCE?
A: Yes, ConnectX-7 fully supports RoCE, providing low-latency RDMA in Ethernet environments.
Q: What security features are included?
A: Inline hardware engines for IPsec (AES-GCM 128/256), TLS 1.3, MACsec, and block-level XTS-AES 256/512-bit encryption. It also features secure boot with a hardware root-of-trust.
Q: Can I install the card in an older PCIe slot?
A: Yes, it is backward compatible with PCIe Gen 4.0 and Gen 3.0 slots, though bandwidth will be limited to the slot's capability (approx. 200Gb/s in Gen 4.0).
- PCIe Slot Requirement: For full 400Gb/s performance, install in a PCIe Gen 5.0 x16 slot. Gen 4.0 slots will limit throughput to ~200Gb/s.
- Cooling: Ensure adequate airflow in server chassis; passive cooling requires minimum 300 LFM at 400G operation.
- Cabling: Use QSFP112 passive/active copper or optical modules rated for 400Gb/s (NDR).
- Driver Support: Use latest NVIDIA MLNX_OFED for Linux or WinOF-2 for Windows.
- Operating Temperature: 0°C to 70°C; store between -40°C and 85°C.
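The Gen 4.0 throughput limit noted above follows directly from PCIe lane rates. A small sketch of the arithmetic (raw line rates after 128b/130b encoding, before TLP/DLLP protocol overhead, which is why real-world Gen 4.0 throughput lands nearer the ~200Gb/s figure quoted above):

```python
# Raw PCIe x16 link bandwidth per slot generation, after 128b/130b
# line encoding but before transaction-layer protocol overhead.
GT_PER_LANE = {"Gen3": 8.0, "Gen4": 16.0, "Gen5": 32.0}  # GT/s per lane
ENCODING = 128 / 130  # 128b/130b line code (PCIe Gen3 and later)
LANES = 16

def link_gbps(gen: str, lanes: int = LANES) -> float:
    """Usable line rate in Gb/s for a given PCIe generation and width."""
    return GT_PER_LANE[gen] * ENCODING * lanes

for gen in GT_PER_LANE:
    verdict = "sustains" if link_gbps(gen) >= 400 else "cannot sustain"
    print(f"{gen} x16: {link_gbps(gen):6.1f} Gb/s -> {verdict} 400Gb/s")
```

Only a Gen 5.0 x16 slot (~504 Gb/s of raw link bandwidth) leaves headroom for full-duplex 400Gb/s traffic; a Gen 4.0 x16 slot tops out around 252 Gb/s raw, so the adapter link, not the network port, becomes the bottleneck.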
With over a decade of experience, we operate a large-scale factory backed by a strong technical team. Our extensive customer base and domain expertise enable us to offer competitive pricing without compromising on quality. As authorized distributors for Mellanox, Ruckus, Aruba, and Extreme, we stock original network switches, network interface cards (NICs), wireless access points, controllers, and cabling. We maintain a 10 million USD inventory to ensure rapid fulfillment across diverse product lines. Every shipment is verified for accuracy, and we provide 24/7 consultation and technical support. Our professional sales and technical teams have earned a high reputation in global markets; partner with us for reliable infrastructure solutions.







