Related standards

  • Networking standards that are not part of the IEEE 802.3 Ethernet standard, but support the Ethernet frame format, and are capable of interoperating with it.
    • LattisNet — A SynOptics pre-standard twisted-pair 10 Mbit/s variant.
    • 100BaseVG — An early contender for 100 Mbit/s Ethernet that runs over four pairs of Category 3 cabling; it was a commercial failure.
    • TIA 100BASE-SX — Promoted by the Telecommunications Industry Association. 100BASE-SX is an alternative implementation of 100 Mbit/s Ethernet over fiber; it is incompatible with the official 100BASE-FX standard. Its main feature is interoperability with 10BASE-FL, supporting autonegotiation between 10 Mbit/s and 100 Mbit/s operation – a feature lacking in the official standards due to the use of differing LED wavelengths. It is targeted at the installed base of 10 Mbit/s fiber network installations.
    • TIA 1000BASE-TX — Promoted by the Telecommunications Industry Association, it was a commercial failure, and no products exist. 1000BASE-TX uses a simpler protocol than the official 1000BASE-T standard so the electronics can be cheaper, but requires Category 6 cabling.
  • Networking standards that do not use the Ethernet frame format but can still be connected to Ethernet using MAC-based bridging.
    • 802.11 — A standard for wireless networking often paired with an Ethernet backbone.
  • 10BaseS — Ethernet over VDSL
  • Long Reach Ethernet
  • Avionics Full-Duplex Switched Ethernet
  • Metro Ethernet

Physical layer

The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular connectors (not to be confused with FCC's RJ45).

Currently Ethernet has many varieties that vary both in speed and physical medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted pair cables and 8P8C modular connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.

Fiber optic variants of Ethernet are commonly seen connecting buildings or network cabinets in different parts of a building but are rarely seen connected to end systems for cost reasons. Their advantages lie in performance, electrical isolation and distance, up to tens of kilometers with some versions. Fiber versions of a new speed almost invariably come out before copper. 10 gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with development starting on 100G Ethernet.

Through Ethernet's history there have also been RF versions of Ethernet, both wireline and wireless. The currently recommended RF wireless networking standards, 802.11 and 802.16, are not Ethernet, in that they do not use the Ethernet link-layer header, and use control and management packet types that don't exist in Ethernet – it would not be simply a matter of modulation to transmit Ethernet packets on an 802.11 or 802.16 network, or to transmit 802.11 or 802.16 packets on an Ethernet network.

Ethernet frame types and the EtherType field

Frames are the format of data packets on the wire. Note that a frame viewed on the actual physical hardware would show start bits, sometimes called the preamble, and the trailing Frame Check Sequence. These are required by all physical hardware and are present in all four frame types described below. They are not displayed by packet sniffing software because these bits are removed by the Ethernet adapter before the frame is passed to the network protocol stack software.

There are several types of Ethernet frames:

  • The Ethernet Version 2 (Ethernet II) frame, the so-called DIX frame, which carries an EtherType field.
  • Novell's non-standard "raw" IEEE 802.3 frame, which omits the IEEE 802.2 LLC header.
  • The IEEE 802.3 frame with an IEEE 802.2 LLC header.
  • The IEEE 802.3 frame with IEEE 802.2 LLC and SNAP headers.

In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag identifying the VLAN to which the frame belongs and its IEEE 802.1p priority (quality of service). This doubles the potential number of frame types.

The different frame types have different formats and MTU values, but can coexist on the same physical medium.


Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification have a 16-bit sub-protocol label field called the EtherType. The original IEEE 802.3 Ethernet specification replaced that with a 16-bit length field, with the MAC header followed by an IEEE 802.2 logical link control (LLC) header; the maximum length of the payload was 1500 bytes. The two formats were eventually unified by the convention that values of that field between 0 and 1500 indicated the use of the original 802.3 Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the DIX frame format with an EtherType sub-protocol identifier.[4] This convention allows software to determine whether a frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards on the same physical medium. See also Jumbo Frames.
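
As a rough illustration of that convention, the sketch below (in Python, with illustrative names and no error handling; it is an assumption for illustration, not taken from the standard) classifies a raw frame, assuming the buffer starts at the destination MAC address with the preamble and FCS already stripped:

```python
# Minimal sketch: classify a raw Ethernet frame as Ethernet II or original
# IEEE 802.3 using the length/type convention described above.
# The buffer is assumed to start at the destination MAC, without preamble or FCS.

def classify_frame(frame: bytes) -> str:
    # Bytes 12-13, right after the two 6-byte MAC addresses, hold the
    # EtherType/length field in network (big-endian) byte order.
    type_or_length = int.from_bytes(frame[12:14], "big")
    if type_or_length <= 1500:
        # 0..1500 is a payload length: IEEE 802.3 framing, normally
        # followed by an IEEE 802.2 LLC header.
        return f"IEEE 802.3, payload length {type_or_length}"
    if type_or_length >= 0x0600:
        # 1536 (0x0600) and above is an EtherType: DIX / Ethernet II framing.
        return f"Ethernet II, EtherType 0x{type_or_length:04X}"
    return "undefined (values 1501-1535 are not assigned)"
```

An IPv4 datagram carried in an Ethernet II frame, for instance, would be reported with EtherType 0x0800.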

By examining the 802.2 LLC header, it is possible to determine whether it is followed by a SNAP (subnetwork access protocol) header. Some protocols, particularly those designed for the OSI networking stack, operate directly on top of 802.2 LLC, which provides both datagram and connection-oriented network services. The LLC header includes two additional eight-bit address fields, called service access points or SAPs in OSI terminology; when both source and destination SAP are set to the value 0xAA, the SNAP service is requested. The SNAP header allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private protocol ID spaces. In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly allow the 16-bit field following the MAC addresses to serve as either a length field or a type field.
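
Continuing the illustrative Python sketch above (the function name and the one-byte LLC control field, as used with SNAP framing, are assumptions for illustration), the SNAP check described here amounts to examining the first few bytes of the 802.3 payload:

```python
# Sketch: inspect the IEEE 802.2 LLC header at the start of an 802.3 payload
# and report whether a SNAP header follows (both SAPs set to 0xAA).

def parse_llc(payload: bytes) -> dict:
    dsap, ssap = payload[0], payload[1]  # destination and source service access points
    if dsap == 0xAA and ssap == 0xAA:
        # SNAP requested: after the 1-byte control field come a 3-byte OUI and
        # a 2-byte protocol ID; with an OUI of 00-00-00 the protocol ID is an
        # ordinary EtherType value.
        oui = payload[3:6]
        protocol_id = int.from_bytes(payload[6:8], "big")
        return {"snap": True, "oui": oui.hex(), "protocol_id": f"0x{protocol_id:04X}"}
    return {"snap": False, "dsap": f"0x{dsap:02X}", "ssap": f"0x{ssap:02X}"}
```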

Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell used this as a starting point to create the first implementation of its own IPX network protocol over Ethernet. It did not use any LLC header but started the IPX packet directly after the length field. This does not conform to the IEEE 802.3 standard, but since an IPX packet always has FF in its first two bytes (a pattern that is theoretically possible but extremely unlikely in IEEE 802.2 LLC), in practice this mostly coexists on the wire with other Ethernet implementations, with the notable exception of some early forms of DECnet, which were confused by this.
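
That heuristic is simple to express; the check below is only a sketch of the idea (the function name is illustrative), not a description of how any particular driver implements it:

```python
# Sketch: recognising a Novell "raw" 802.3 frame. Its payload begins directly
# with the IPX header, whose first two bytes (the checksum field) are always
# 0xFF 0xFF; a conforming IEEE 802.2 LLC header almost never starts that way.

def looks_like_novell_raw(payload: bytes) -> bool:
    return payload[:2] == b"\xff\xff"
```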

Novell NetWare used this frame type by default until the mid-1990s, and since NetWare was then very widespread while IP was not, at one point most of the world's Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare defaults to IEEE 802.2 with LLC (NetWare frame type Ethernet_802.2) when using IPX. (See "Ethernet Framing" in References for details.)

Mac OS uses 802.2/SNAP framing for the AppleTalk V2 protocol suite on Ethernet ("EtherTalk") and Ethernet II framing for TCP/IP.

The 802.2 variants of Ethernet are not in widespread use on common networks currently, with the exception of large corporate NetWare installations that have not yet migrated to NetWare over IP. In the past, many corporate networks used 802.2 Ethernet to support transparent translating bridges between Ethernet and IEEE 802.5 Token Ring or FDDI networks. The most common framing type used today is Ethernet Version 2, as it is used by most Internet Protocol-based networks, with its EtherType set to 0x0800 for IPv4 and 0x86DD for IPv6.

There exists an Internet standard for encapsulating IP version 4 traffic in IEEE 802.2 frames with LLC/SNAP headers.[5] It is almost never implemented on Ethernet (although it is used on FDDI and on token ring, IEEE 802.11, and other IEEE 802 networks). IP traffic can not be encapsulated in IEEE 802.2 LLC frames without SNAP because, although there is an LLC protocol type for IP, there is no LLC protocol type for ARP. IP Version 6 can also be transmitted over Ethernet using IEEE 802.2 with LLC/SNAP, but, again, that's almost never used (although LLC/SNAP encapsulation of IPv6 is used on IEEE 802 networks).

The IEEE 802.1Q tag, if present, is placed between the Source Address and the EtherType or Length fields. The first two bytes of the tag are the Tag Protocol Identifier (TPID) value of 0x8100. This is located in the same place as the EtherType/Length field in untagged frames, so an EtherType value of 0x8100 means the frame is tagged, and the true EtherType/Length is located after the tag. The TPID is followed by two bytes containing the Tag Control Information (TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The tag is followed by the rest of the frame, using one of the types described above.
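
A minimal sketch of that layout (again illustrative Python, with no bounds checking) shows how the priority and VLAN ID can be pulled out of a tagged frame:

```python
# Sketch: extract the IEEE 802.1p priority and VLAN ID from an 802.1Q-tagged
# frame, following the layout described above.

def parse_8021q(frame: bytes):
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != 0x8100:
        return None  # untagged: bytes 12-13 are the ordinary EtherType/length field
    tci = int.from_bytes(frame[14:16], "big")
    priority = tci >> 13      # top 3 bits: IEEE 802.1p priority
    vlan_id = tci & 0x0FFF    # bottom 12 bits: VLAN identifier
    # The true EtherType/length of the encapsulated frame follows the tag.
    inner_type_or_length = int.from_bytes(frame[16:18], "big")
    return {"priority": priority, "vlan_id": vlan_id,
            "ethertype_or_length": f"0x{inner_type_or_length:04X}"}
```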

Autonegotiation and duplex mismatch

Autonegotiation cannot detect the duplex setting of a link partner that is not itself using autonegotiation. When two interfaces are connected and set to different "duplex" modes, the effect of the duplex mismatch is a network that works, but much more slowly than at its nominal speed. The primary rule for avoiding this is not to set one end of a connection to forced full duplex while the other end is set to autonegotiation.

Many different modes of operation (10BASE-T half duplex, 10BASE-T full duplex, 100BASE-TX half duplex, …) exist for Ethernet over twisted pair cable using 8P8C modular connectors (not to be confused with FCC's RJ45), and most devices support several of them. In 1995, a standard was released allowing two connected network interfaces to autonegotiate the best possible shared mode of operation. This works well when every device is set to autonegotiate. The autonegotiation standard contained a mechanism for detecting the speed, but not the duplex setting, of Ethernet peers that did not use autonegotiation.

Interoperability problems led network administrators to manually set the mode of operation of interfaces on network devices: some devices failed to autonegotiate and therefore had to be forced into one setting or another. This often led to duplex mismatches. In particular, when one of two connected interfaces is set to autonegotiation and the other is forced to full duplex mode, autonegotiation fails and the autonegotiating interface falls back to half duplex. The full-duplex interface then transmits at the same time as it receives, while the half-duplex interface treats the incoming traffic as a collision with its own transmission, gives up on the packet it was sending, and waits for an amount of time determined by a backoff (random wait time) algorithm before retrying. When both sides try to transmit again they interfere again, and the backoff strategy may wait longer and longer before each new attempt; eventually a transmission succeeds, but the collisions then resume.
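
For reference, the random wait mentioned above is the truncated binary exponential backoff used by half-duplex (CSMA/CD) Ethernet. A minimal sketch of that rule, not tied to any particular implementation, looks like this:

```python
import random

# Sketch of truncated binary exponential backoff: after the n-th consecutive
# collision a station waits a random number of slot times drawn from
# 0 .. 2**min(n, 10) - 1, and gives up on the frame after 16 consecutive collisions.

def backoff_slots(collision_count: int) -> int:
    if collision_count >= 16:
        raise RuntimeError("excessive collisions: frame is discarded")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)
```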

Because of the wait times, the effect of a duplex mismatch is a network that is not completely 'broken' but is incredibly slow.

More advanced networks

Simple switched Ethernet networks, while an improvement over hub-based Ethernet, suffer from a number of issues:

  • They suffer from single points of failure. If any link fails, some devices will be unable to communicate with others, and if the failed link is in a central location, many users can be cut off from the resources they require.
  • It is possible to trick switches or hosts into sending data to a machine even if it is not intended for that machine, as indicated above.
  • Large amounts of broadcast traffic, whether malicious, accidental, or simply a side effect of network size, can flood slower links and/or systems.
    • It is possible for any host to flood the network with broadcast traffic, forming a denial-of-service attack against any hosts that run at the same or a lower speed than the attacking device.
    • As the network grows, normal broadcast traffic takes up an ever-greater amount of bandwidth.
    • If switches are not multicast-aware, multicast traffic will end up treated like broadcast traffic, because it is directed at a MAC address with no associated port.
    • If switches discover more MAC addresses than they can store (either through network size or through an attack), some addresses must inevitably be dropped, and traffic to those addresses is then treated the same way as traffic to unknown addresses, that is, essentially like broadcast traffic (this issue is known as fail-open).
  • They suffer from bandwidth choke points where a lot of traffic is forced down a single link.

Some switches offer a variety of tools to combat these issues including:

  • Spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy.
  • Various port protection features, as it is far more likely an attacker will be on an end system port than on a switch-switch link.
  • VLANs to keep different classes of users separate while using the same physical infrastructure.
  • Fast routing at higher levels to route between those VLANs.
  • Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy, although the links won't protect against switch failure because they connect the same pair of switches.

Dual speed hubs

In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices. However, hubs suffered from the problem that if any 10BASE-T devices were connected, the whole system had to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch appeared, known as a dual-speed hub. These devices consist of an internal two-port switch dividing the 10BASE-T (10 Mbit/s) and 100BASE-T (100 Mbit/s) segments, and typically have more than two physical ports. When a network device becomes active on any of the physical ports, the hub attaches it to either the 10BASE-T segment or the 100BASE-T segment, as appropriate. This avoided the need for an all-or-nothing migration from 10BASE-T to 100BASE-T networks. These devices are known as hubs rather than switches because traffic between devices connected at the same speed is not switched.

Bridging and switching

While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on how many machines could communicate on an Ethernet network. Also, because the entire network was one collision domain and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the furthest nodes was limited. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.

To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are by watching source MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.

Prior to discovery of network devices on the different segments, Ethernet bridges and switches work somewhat like Ethernet hubs, passing all traffic between segments. However, as the switch discovers the addresses associated with each port, it forwards network traffic only to the necessary segments, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcame the limits on total segments between two hosts and allowed the mixing of speeds, both of which became very important with the introduction of Fast Ethernet.
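
The learn-and-forward behaviour described above can be sketched in a few lines of Python; the class below is purely illustrative (real bridges also age table entries out and must cope with the table overflow discussed earlier):

```python
class LearningBridge:
    """Illustrative model of the learning and forwarding behaviour of a bridge."""

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # source MAC address -> port it was last seen on

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str) -> list:
        # Learn: remember which port the sender lives behind.
        self.mac_table[src_mac] = in_port

        # Known unicast destination: forward to that port only, and filter the
        # frame entirely if the destination is on the segment it arrived from.
        if dst_mac != self.BROADCAST and dst_mac in self.mac_table:
            out_port = self.mac_table[dst_mac]
            return [] if out_port == in_port else [out_port]

        # Broadcast or unknown destination: flood to every other port.
        return [p for p in range(self.num_ports) if p != in_port]
```

Broadcast and unknown-destination frames are still flooded to every port, which is the behaviour listed earlier as a remaining weakness of switched networks.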

Early bridges examined each packet one by one using software on a CPU, and some of them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does bridging in hardware, allowing it to forward packets at full wire speed. It is important to remember that the term switch was invented by device manufacturers and does not appear in the 802.3 standard. Functionally, the two terms are interchangeable.

Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other, the ability to easily mix different speeds of device and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.

When a twisted pair or fiber link segment is used and neither end is connected to a hub, full-duplex Ethernet becomes possible over that segment. In full duplex mode both devices can transmit and receive to/from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used and that segment length is not limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).