Talk:Link aggregation
This article is rated C-class on Wikipedia's content assessment scale.
The contents of the Channel bonding page were merged into Link aggregation on 29 September 2018. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
Redirects to this page
The following redirects have been created that link to this article:
- 802.3ad
- ethernet trunk
- ethernet trunking
- link aggregate group
- NIC teaming
- port trunking
- port teaming
Deleted this
- Some low-cost switches will typically have 24 or 48 10/100 Mbit ports, and two additional gigabit ports for the backbone. The expected usage is that there is a 1-gigabit backbone, and the second gigabit port passes the backbone data along to the next switch in the network closet.
- While the two 1-gigabit ports may support operating as a single 2-gigabit trunk, there is no way for the switch to pass this 2-gigabit trunk along to additional switches. For a network with an expected maximum backbone speed of 2 gigabits, this is acceptable in a remote closet that can be fully served by a single switch with only 24 or 48 10/100 Mbit ports. It is also acceptable if there are a lot of switches in a closet and a single expensive switch can be used to manage all their uplinks.
We probably all know which switch he is describing, and it should be patently obvious that if you are using this switch and decide to use BOTH of the gig ports for a downlink, then you won't have any ports left over to uplink. This has nothing to do with link aggregation and is clearly not a limitation of it.
LACP
LACP is part of the 802.3ad specification, yet it is not mentioned in this article. Anyone care to add it in the appropriate place? fonetikli 23:00, 20 July 2006 (UTC)
- Link Aggregation Control Protocol now redirects to Link aggregation#Link Aggregation Control Protocol ~Kvng (talk) 16:22, 14 April 2025 (UTC)
Packet Reordering
The page states "originally, link aggregation was developed to provide redundancy, and not bandwidth benefits". I don't find that in any of the cited material. Moreover, I don't believe that it's strictly true. Is this an inference based on the load-balancing behavior? Did somebody assume that since a single flow tops out at the speed of a single aggregated device that this was "developed to provide redundancy" rather than "bandwidth"?
The Linux bonding.txt documentation (linked from this article) hints at the real reason starting at line 1784. Basically, TCP (and most protocols that expect in-order delivery) don't handle packet reordering well. TCP detects out-of-order packets as lost packets (typically signifying drops due to congestion) and can generate spurious, unnecessary retransmits. IP fragment handling gets even more pathological (worst case, it becomes O(N) where N is the number of fragments).
I think that aggregation was originally designed not to provide redundancy but just to aggregate bandwidth. The common default LACP timer intervals are too slow for timely failover. Most aggregation systems can also be configured statically, without LACP. Rather, it seems that they were just designed to aggregate, and the port-selection algorithms grew this way because they had to (i.e., straight round-robin hurt performance so much that per-flow selection was required). All of this seems to indicate the opposite of what the article suggests, although without any original material to cite, I don't know if we should even take a position on this.
So, should we reword this bit to be a bit more neutral?--Jayson Vantuyl (talk) 14:36, 29 November 2008 (UTC)
- The article describes both bandwidth and reliability motivations in a neutral way in Link_aggregation#Motivation. There is a WP:PRIMARY source (IEEE design presentations) cited which appears to support both. ~Kvng (talk) 16:29, 14 April 2025 (UTC)
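The per-flow port selection discussed above (one flow pinned to one link, so TCP never sees reordering) can be sketched roughly as follows. This is an illustrative toy, not the actual bonding driver's algorithm; the hash function, field choice, and function name are all assumptions made for the example:

```python
# Illustrative sketch of hash-based link selection in an aggregation group.
# Every packet of a given flow hashes to the same value, so the whole flow
# rides a single member link and arrives in order; different flows spread
# across the links. Not taken from any real driver.
import hashlib

def select_link(src_mac, dst_mac, src_ip, dst_ip, src_port, dst_port, n_links):
    """Map a flow's header tuple onto one of n_links aggregated ports."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}{src_port}{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % n_links

# All packets of one TCP session map to the same link, which is why a
# single session tops out at the speed of one member port.
flow = ("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02",
        "10.0.0.1", "10.0.0.2", 40000, 443)
assert select_link(*flow, n_links=2) == select_link(*flow, n_links=2)
```

The trade-off the thread describes falls out directly: round-robin would use all links for one flow but reorder its packets, while hashing keeps ordering at the cost of capping each flow at one link's speed.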
Maybe I'm confused
But in the past I've worked with port trunking, and it's different from link aggregation. Port trunking assigns the same IP to all the ports on the trunk, and you need a special switch to connect to (or to go port to port between two systems) for port trunking to work, as you cannot have multiple ports on a network with the same IP.
With (Linux) port aggregation the ports have different IPs and you do not need a trunking switch or any special equipment.
Now in my Linux bandwidth testing I found that aggregating two e1000 ports resulted in no increase of bandwidth, which I found surprising, but from the article text it seems that all traffic from a single session will flow down one wire. With multiple sessions the bandwidth should be more fully usable, but I did not test in that configuration. The trunking tests I did were with Solaris Sparc, and I was CPU limited and unable to determine if there was a net bandwidth gain, as I barely had the CPU to saturate one port.
I guess I need to look up the specification and see what it says. But from what I know, port trunking is something different (and I don't think it really caught on). Then again, maybe I'm just confused about how things are named.
76.254.74.81 (talk) 17:46, 19 August 2009 (UTC)Rich
- Link aggregation operates at layer 2 and below so IP addresses are not involved. ~Kvng (talk) 16:31, 14 April 2025 (UTC)
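The layer-2 point in the reply can be illustrated with the Linux bonding driver: the member ports carry no IP addresses of their own, and the single address sits on the bond interface. This is a hedged sketch using iproute2; the interface names eth0/eth1 and the documentation-range address are placeholders:

```shell
# Sketch: create an 802.3ad bond; the slaves are pure layer-2 members.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
# The IP address lives on bond0 only -- never on eth0 or eth1.
ip addr add 192.0.2.10/24 dev bond0
ip link set bond0 up
```

That is the difference from the "same IP on every port" scheme described above: here the aggregate presents one logical interface, and addressing never touches the physical ports.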
Linux part, out of context?
There is no introduction to this part at all. What is it really about, and why such a long part about one type of LACP implementation? Why is the Linux LACP so important to note in a general article about LACP, when, for example, LACP config modes on Cisco or HP Procurve switches are not mentioned? There are also words used ("slaves") that are not defined at all. —Preceding unsigned comment added by 212.73.30.50 (talk) 13:18, 3 March 2011 (UTC)
Very much agree; neutral info would be a lot better. No one would want a section about Tru64 LAGG either. I was looking for some LACP info and quickly skipped over the article to the IEEE doc, as the relevant info (LACPDU rates for fast and slow) was missing in favor of OS-specific stuff. —Preceding unsigned comment added by 188.174.67.0 (talk) 23:58, 24 April 2011 (UTC)
- The Linux-specific content is largely confined to Link_aggregation#Linux_drivers. There may be an WP:UNDUE issue that can be improved by WP:SPLIT or other reorganization or by reducing the size of this section. ~Kvng (talk) 16:35, 14 April 2025 (UTC)
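For the record, the fast/slow rates asked about above are the two LACPDU timers: "slow" requests an LACPDU from the partner every 30 seconds, "fast" every 1 second. On Linux this is the bonding driver's lacp_rate option; a minimal sketch of toggling it via sysfs (bond0 is a placeholder name for an existing 802.3ad bond):

```shell
# Sketch: switch the bond's LACPDU request rate between fast (1 s)
# and slow (30 s). Requires an existing bond in 802.3ad mode.
echo fast > /sys/class/net/bond0/bonding/lacp_rate
cat /sys/class/net/bond0/bonding/lacp_rate
```

The rates themselves come from the IEEE spec, not from Linux; the sysfs path is just one OS's way of selecting between them.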
SMLT and other stacking/multichassis technologies
SMLT is proprietary to Nortel. Kudos to them for being the first, and for a while the only, vendor with a solution, but they no longer stand alone here. Cisco has at least three technologies (MEC on the 6500 series, Cross-Stack EtherChannel on the 3750, and VPC on the Nexus, as mentioned above), and even 3Com (DLA on the 5500/5500G) and D-Link have (or had) offerings.
Point is, if we're going to include SMLT, we should include others. (I vote for listing others as well.) Titaniumlegs (talk) 23:04, 30 September 2011 (UTC)
- Many are now listed in Link_aggregation#Proprietary_link_aggregation. Multi-chassis link aggregation group is also linked earlier in the article. ~Kvng (talk) 16:39, 14 April 2025 (UTC)
Use of acronym LAG not clear
This article uses the acronym LAG multiple times but never spells it out. If you search for "Link Aggregate Group" using Wikipedia search, it sends you to this page, but searching for "Link Aggregation Group" returns a list of possibilities that include this page. The phrase "link aggregation group" is used on this page. According to www.acronymfinder.com the IEEE standard reference is "Link Aggregation Group." This article would be clearer if it showed what the acronym stood for. — Preceding unsigned comment added by 108.52.128.210 (talk) 15:21, 25 July 2012 (UTC)
- This appears to have been addressed. Link aggregate group and Link aggregation group both redirect here. The LAG acronym is defined in the lead. ~Kvng (talk) 16:43, 14 April 2025 (UTC)