- Networking standards that are not part of the IEEE 802.3 Ethernet standard, but support the Ethernet frame format, and are capable of interoperating with it.
- LattisNet — A SynOptics pre-standard twisted-pair 10 Mbit/s variant.
- 100BaseVG — An early contender for 100 Mbit/s Ethernet that runs over Category 3 cabling, using all four pairs. It was a commercial failure.
- TIA 100BASE-SX — Promoted by the Telecommunications Industry Association. 100BASE-SX is an alternative implementation of 100 Mbit/s Ethernet over fiber; it is incompatible with the official 100BASE-FX standard. Its main feature is interoperability with 10BASE-FL, supporting autonegotiation between 10 Mbit/s and 100 Mbit/s operation – a feature lacking in the official standards due to the use of differing LED wavelengths. It is targeted at the installed base of 10 Mbit/s fiber network installations.
- TIA 1000BASE-TX — Promoted by the Telecommunications Industry Association, it was a commercial failure, and no products exist. 1000BASE-TX uses a simpler protocol than the official 1000BASE-T standard so the electronics can be cheaper, but requires Category 6 cabling.
- Networking standards that do not use the Ethernet frame format but can still be connected to Ethernet using MAC-based bridging.
- 802.11 — A standard for wireless networking often paired with an Ethernet backbone.
- 10BaseS — Ethernet over VDSL
- Long Reach Ethernet
- Avionics Full-Duplex Switched Ethernet
- Metro Ethernet
Tuesday, September 18, 2007
Related standards
Physical layer
The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular connectors (not to be confused with FCC's RJ45).
Currently Ethernet has many varieties that vary both in speed and physical medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted-pair cables and 8P8C modular connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.
Fiber optic variants of Ethernet are commonly seen connecting buildings or network cabinets in different parts of a building but are rarely seen connected to end systems for cost reasons. Their advantages lie in performance, electrical isolation and distance, up to tens of kilometers with some versions. Fiber versions of a new speed almost invariably come out before copper. 10 gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with development starting on 100G Ethernet.
Through Ethernet's history there have also been RF versions of Ethernet, both wireline and wireless. The currently recommended RF wireless networking standards, 802.11 and 802.16, are not Ethernet, in that they do not use the Ethernet link-layer header, and use control and management packet types that don't exist in Ethernet – it would not be simply a matter of modulation to transmit Ethernet packets on an 802.11 or 802.16 network, or to transmit 802.11 or 802.16 packets on an Ethernet network.
Ethernet frame types and the EtherType field
There are several types of Ethernet frames:
- The Ethernet Version 2 or Ethernet II frame, the so-called DIX frame (named after DEC, Intel, and Xerox); this is the most common today, as it is often used directly by the Internet Protocol.
- Novell's non-standard variation of IEEE 802.3 ("raw 802.3 frame") without an IEEE 802.2 LLC header.
- IEEE 802.2 LLC frame
- IEEE 802.2 LLC/SNAP frame
In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag identifying which VLAN the frame belongs to and its IEEE 802.1p priority (quality of service). This doubles the potential number of frame types.
The different frame types have different formats and MTU values, but can coexist on the same physical medium.
Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification have a 16-bit sub-protocol label field called the EtherType. The original IEEE 802.3 Ethernet specification replaced that with a 16-bit length field, with the MAC header followed by an IEEE 802.2 logical link control (LLC) header; the maximum length of a packet was 1500 bytes. The two formats were eventually unified by the convention that values of that field between 0 and 1500 indicated the use of the original 802.3 Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the DIX frame format with an EtherType sub-protocol identifier.[4] This convention allows software to determine whether a frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards on the same physical medium. See also Jumbo Frames.
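The length-versus-EtherType convention is easy to express in code; here is a minimal sketch in Python (the function name and return strings are illustrative, not from any standard library):

```python
def classify_ethernet_field(value):
    """Interpret the 16-bit field that follows the MAC addresses.

    Values of 1500 and below are an IEEE 802.3 length field; values of
    1536 (0x0600) and above are a DIX/Ethernet II EtherType.
    """
    if value <= 1500:
        return "802.3 length"
    if value >= 0x0600:
        return "EtherType"
    return "undefined"  # 1501-1535 fall outside both conventions

# Well-known EtherType values
assert classify_ethernet_field(0x0800) == "EtherType"   # IPv4
assert classify_ethernet_field(0x86DD) == "EtherType"   # IPv6
assert classify_ethernet_field(1500) == "802.3 length"  # max 802.3 payload
```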
By examining the 802.2 LLC header, it is possible to determine whether it is followed by a SNAP (subnetwork access protocol) header. Some protocols, particularly those designed for the OSI networking stack, operate directly on top of 802.2 LLC, which provides both datagram and connection-oriented network services. The LLC header includes two additional eight-bit address fields, called service access points or SAPs in OSI terminology; when both source and destination SAP are set to the value 0xAA, the SNAP service is requested. The SNAP header allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private protocol ID spaces. In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly allow the use of the 16-bit field after the MAC addresses to be used as a length field or a type field.
Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell used this as a starting point to create the first implementation of its own IPX Network Protocol over Ethernet. They did not use any LLC header but started the IPX packet directly after the length field. This does not conform to the IEEE 802.3 standard, but since IPX has always FF at the first two bytes (while in IEEE 802.2 LLC that pattern is theoretically possible but extremely unlikely), in practice this mostly coexists on the wire with other Ethernet implementations, with the notable exception of some early forms of DECnet which got confused by this.
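Combined with the LLC and SNAP rules above, the first payload bytes of a length-field (802.3) frame are enough to tell the variants apart. A rough sketch, assuming a well-formed frame:

```python
def classify_8023_payload(payload: bytes) -> str:
    """Distinguish 802.3 frame variants by the bytes after the length
    field, using the heuristics described in the text."""
    if payload[:2] == b"\xff\xff":
        # Novell "raw" 802.3: the IPX header begins with an 0xFFFF
        # checksum field, a pattern essentially never valid in 802.2 LLC
        return "Novell raw 802.3 (IPX)"
    if payload[:2] == b"\xaa\xaa":
        # Source and destination SAPs both set to 0xAA request SNAP
        return "802.2 LLC/SNAP"
    return "802.2 LLC"

assert classify_8023_payload(b"\xff\xff\x00\x30") == "Novell raw 802.3 (IPX)"
assert classify_8023_payload(b"\xaa\xaa\x03") == "802.2 LLC/SNAP"
```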
Novell NetWare used this frame type by default until the mid-1990s, and since NetWare was then very widespread while IP was not, at some point in time most of the world's Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare has defaulted to IEEE 802.2 with LLC (NetWare frame type Ethernet_802.2) when using IPX. (See "Ethernet Framing" in References for details.)
Mac OS uses 802.2/SNAP framing for the AppleTalk V2 protocol suite on Ethernet ("EtherTalk") and Ethernet II framing for TCP/IP.
The 802.2 variants of Ethernet are not in widespread use on common networks currently, with the exception of large corporate Netware installations that have not yet migrated to Netware over IP. In the past, many corporate networks supported 802.2 Ethernet to support transparent translating bridges between Ethernet and IEEE 802.5 Token Ring or FDDI networks. The most common framing type used today is Ethernet Version 2, as it is used by most Internet Protocol-based networks, with its EtherType set to 0x0800 for IPv4 and 0x86DD for IPv6.
There exists an Internet standard for encapsulating IP version 4 traffic in IEEE 802.2 frames with LLC/SNAP headers.[5] It is almost never implemented on Ethernet (although it is used on FDDI and on token ring, IEEE 802.11, and other IEEE 802 networks). IP traffic cannot be encapsulated in IEEE 802.2 LLC frames without SNAP because, although there is an LLC protocol type for IP, there is no LLC protocol type for ARP. IP version 6 can also be transmitted over Ethernet using IEEE 802.2 with LLC/SNAP but, again, this is almost never done (although LLC/SNAP encapsulation of IPv6 is used on IEEE 802 networks).
The IEEE 802.1Q tag, if present, is placed between the Source Address and the EtherType or Length fields. The first two bytes of the tag are the Tag Protocol Identifier (TPID) value of 0x8100. This is located in the same place as the EtherType/Length field in untagged frames, so an EtherType value of 0x8100 means the frame is tagged, and the true EtherType/Length is located after the tag. The TPID is followed by two bytes containing the Tag Control Information (TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The tag is followed by the rest of the frame, using one of the types described above.
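Parsing the tag follows directly from that layout. A sketch, assuming the frame is presented as raw bytes beginning with the destination MAC address (the CFI/DEI bit between the priority and VLAN ID fields is simply ignored here):

```python
def parse_dot1q(frame: bytes):
    """Return (priority, vlan_id, inner_ethertype) if the frame carries
    an 802.1Q tag, or None if it is untagged."""
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != 0x8100:
        return None  # bytes 12-13 are the ordinary EtherType/Length
    tci = int.from_bytes(frame[14:16], "big")
    priority = tci >> 13      # 3-bit 802.1p priority
    vlan_id = tci & 0x0FFF    # 12-bit VLAN identifier
    inner = int.from_bytes(frame[16:18], "big")
    return priority, vlan_id, inner

# A tagged frame: priority 5, VLAN 100, carrying IPv4 (0x0800)
tagged = bytes(12) + b"\x81\x00" + b"\xa0\x64" + b"\x08\x00"
assert parse_dot1q(tagged) == (5, 100, 0x0800)
```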
Autonegotiation and duplex mismatch
The autonegotiation standard cannot detect the duplex setting of a link partner that is not itself set to autonegotiate. When two interfaces are connected and set to different duplex modes, the effect of the duplex mismatch is a network that works, but much more slowly than its nominal speed. The primary rule for avoiding this is never to set one end of a connection to forced full duplex while the other end is set to autonegotiation.
Many different modes of operation (10BASE-T half duplex, 10BASE-T full duplex, 100BASE-TX half duplex, …) exist for Ethernet over twisted-pair cable using 8P8C modular connectors (not to be confused with FCC's RJ45), and most devices support several of them. In 1995, a standard was released that allows two connected network interfaces to autonegotiate the best possible shared mode of operation. This works well when every device is set to autonegotiate. The autonegotiation standard contained a mechanism for detecting the speed, but not the duplex setting, of Ethernet peers that did not use autonegotiation.
Interoperability problems led network administrators to manually set the mode of operation of interfaces on network devices: some devices would fail to autonegotiate and therefore had to be forced into one setting or another. This often led to duplex mismatches. In particular, when one interface is set to autonegotiation and its peer is forced to full duplex, autonegotiation fails and the autonegotiating side falls back to half duplex. The full-duplex interface then transmits while it is receiving, and the half-duplex interface interprets this as a collision, gives up on the packet it was transmitting, and halts transmission for a period determined by a backoff (random wait time) algorithm. When the retransmissions collide again, the backoff strategy may result in longer and longer waits before the next attempt; eventually a transmission succeeds, but this only causes the flood of traffic and the collisions to resume.
Because of the wait times, the effect of a duplex mismatch is a network that is not completely 'broken' but is incredibly slow.
More advanced networks
Simple switched Ethernet networks, while an improvement over hub based Ethernet, suffer from a number of issues:
- They suffer from single points of failure: if any link fails, some devices will be unable to communicate with others, and if the failed link is in a central location, many users can be cut off from the resources they require.
- It is possible to trick switches or hosts into sending data to a machine even when it is not the intended recipient, as indicated above.
- Large amounts of broadcast traffic whether malicious, accidental or simply a side effect of network size can flood slower links and/or systems.
- It is possible for any host to flood the network with broadcast traffic forming a denial of service attack against any hosts that run at the same or lower speed as the attacking device.
- As the network grows normal broadcast traffic takes up an ever greater amount of bandwidth.
- If switches are not multicast aware multicast traffic will end up treated like broadcast traffic due to being directed at a MAC with no associated port.
- If switches discover more MAC addresses than they can store (either through network size or through an attack), some addresses must inevitably be dropped, and traffic to those addresses is then treated the same way as traffic to unknown addresses, that is, essentially as broadcast traffic (this failure mode is known as fail open).
- They suffer from bandwidth choke points where a lot of traffic is forced down a single link.
Some switches offer a variety of tools to combat these issues including:
- Spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy.
- Various port protection features, as it is far more likely an attacker will be on an end system port than on a switch-switch link.
- VLANs to keep different classes of users separate while using the same physical infrastructure.
- Fast routing at higher layers to route between those VLANs.
- Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy, although the links won't protect against switch failure because they connect the same pair of switches.
Dual speed hubs
Bridging and switching
While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on how many machines could communicate on an Ethernet network. Also, as the entire network was one collision domain, and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the furthest nodes was limited. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.
To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.
Prior to discovery of network devices on the different segments, Ethernet bridges and switches work somewhat like Ethernet hubs, passing all traffic between segments. However, as the switch discovers the addresses associated with each port, it only forwards network traffic to the necessary segments improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcame the limits on total segments between two hosts and allowed the mixing of speeds, both of which became very important with the introduction of Fast Ethernet.
Early bridges examined each packet one by one using software on a CPU, and some of them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does bridging in hardware, allowing it to forward packets at full wire speed. It is important to remember that the term switch was invented by device manufacturers and does not appear in the 802.3 standard. Functionally, the two terms are interchangeable.
Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other, the ability to easily mix different speeds of device and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.
When a twisted pair or fiber link segment is used and neither end is connected to a hub, full-duplex Ethernet becomes possible over that segment. In full duplex mode both devices can transmit and receive to/from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used and that segment length is not limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).
Ethernet repeaters and hubs
A greater length could be obtained by an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to connect segments such that there were up to five Ethernet segments between any two hosts, three of which could have attached devices. Repeaters could detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, while all devices on that segment were unable to communicate, repeaters allowed the other segments to continue working, although depending on which segment was broken and the layout of the network the partitioning that resulted may have made other segments unable to reach important servers and thus effectively useless.
People recognized the advantages of cabling in a star topology, primarily that only faults at the star point will result in a badly partitioned network, and network vendors started creating repeaters having multiple ports, thus reducing the number of repeaters required at the star point. Multiport Ethernet repeaters became known as "hubs". Network vendors such as DEC and SynOptics sold hubs that connected many 10BASE2 thin coaxial segments. There were also "multi-port transceivers" or "fan-outs". These could be connected to each other and/or a coax backbone. The best-known early example was DEC's DELNI. These devices allowed multiple hosts with AUI connections to share a single transceiver. They also allowed creation of a small standalone Ethernet segment without using a coaxial cable.
Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only and all termination was built into the device. This changed hubs from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks more reliable by preventing faults with (but not deliberate misbehavior of) one peer or its associated cable from affecting other devices on the network, although a failure of a hub or an inter-hub link could still affect lots of users. Also, since twisted pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with lots of ports and to integrate Ethernet onto computer motherboards.
Despite the physical star topology, hubbed Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems aren't addressed. The total throughput of the hub is limited to that of a single link and all links must operate at the same speed.
Collisions reduce throughput by their very nature. In the worst case, when there are lots of hosts with long cables that attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment.[2] The results showed that, even for minimal Ethernet frames (64B), 90% throughput on the LAN was the norm. This is in comparison with token passing LANs (token ring, token bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits.
This report was wildly controversial, as modeling showed that collision-based networks became unstable under loads as low as 40% of nominal capacity. Many early researchers failed to understand the subtleties of the CSMA/CD protocol and how important it was to get the details right, and were really modeling somewhat different networks (usually not as good as real Ethernet).
Ethernet
Ethernet is a large, diverse family of frame-based computer networking technologies that operate at many speeds for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the physical layer, through means of network access at the Media Access Control (MAC)/Data Link Layer, and a common addressing format.
Ethernet has been standardized as IEEE 802.3. The combination of the twisted pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones, has become the most widespread wired LAN technology. It has been in use from the 1990s to the present, largely replacing competing LAN standards such as coaxial cable Ethernet, token ring, FDDI, and ARCNET. In recent years, Wi-Fi, the wireless LAN standardized by IEEE 802.11, has been used instead of Ethernet for many home and small office networks and in addition to Ethernet in larger installations.
History
Ethernet was originally developed as one of the many pioneering projects at Xerox PARC. Ethernet was invented in the period of 1973–1975.[1] Robert Metcalfe and David Boggs wrote and presented their "Draft Ethernet Overview" some time before March 1974. In March 1974, R.Z. Bachrach wrote a memo to Metcalfe and Boggs, and their management, stating that "technically or conceptually there is nothing new in your proposal" and that "analysis would show that your system would be a failure." This analysis was flawed, however, in that it ignored the "channel capture effect", though this was not understood until 1994. In 1975, Xerox filed a patent application listing Metcalfe and Boggs, plus Chuck Thacker and Butler Lampson, as inventors (U.S. Patent 4,063,220 : Multipoint data communication system with collision detection). In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a paper titled Ethernet: Distributed Packet-Switching For Local Computer Networks.
The experimental Ethernet described in that paper ran at 3 Mbit/s, and had 8-bit destination and source address fields, so Ethernet addresses were not the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet, which specifies the protocol being used.
Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it standardized the 10 megabits/second Ethernet, with 48-bit destination and source addresses and a global 16-bit type field. The standard was first published on September 30, 1980. It competed with two largely proprietary systems, token ring and ARCNET, but those soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company.
Twisted-pair Ethernet systems have been developed since the mid-80s, beginning with StarLAN (but becoming widely known with 10BASE-T). These systems replaced the coaxial cable on which early Ethernets were deployed with a system of hubs linked with unshielded twisted pair and later replaced the CSMA/CD scheme in favor of a switched full duplex system offering higher performance.
General description
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference that the name "Ethernet" was derived.
From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today powers the vast majority of local computer networks. The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring enabled Ethernet to become a commercial success.
Above the physical layer, Ethernet stations communicate by sending each other data packets, small blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
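The globally-unique versus locally-administered distinction is encoded in the first octet of the address itself. A small sketch (the helper name is illustrative):

```python
def mac_flags(mac: str):
    """Inspect the two flag bits in the first octet of a MAC address:
    bit 0 (0x01) marks a group (multicast) address, and bit 1 (0x02)
    marks a locally administered rather than globally unique address."""
    first_octet = int(mac.split(":")[0], 16)
    return {
        "multicast": bool(first_octet & 0x01),
        "locally_administered": bool(first_octet & 0x02),
    }

assert mac_flags("00:1a:2b:3c:4d:5e") == {"multicast": False,
                                          "locally_administered": False}
assert mac_flags("02:00:00:00:00:01")["locally_administered"] is True
```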
Despite the very significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding very early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily (and in most cases, cheaply) interconnected.
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need for installation of a separate network card.
Dealing with multiple users
CSMA/CD shared medium Ethernet
Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. When a computer wanted to send some information, it used the following algorithm:
Main procedure
- Frame ready for transmission
- Is medium idle? If not, wait until it becomes idle, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
- Start transmitting
- Does a collision occur? If so, go to collision detected procedure.
- Reset retransmission counters and end frame transmission
Collision detected procedure
- Continue transmission until minimum packet time is reached (jam signal) to ensure that all receivers detect the collision
- Increment retransmission counter
- Is maximum number of transmission attempts reached? If so, abort transmission.
- Calculate and wait random backoff period based on number of collisions
- Re-enter main procedure at stage 1
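The random backoff in step 4 of the collision procedure is the truncated binary exponential backoff algorithm. A minimal sketch, using the 802.3 limits of 16 attempts and a cap of 10 doublings:

```python
import random

def backoff_slots(attempts: int) -> int:
    """After the n-th successive collision, wait a random number of
    slot times drawn uniformly from [0, 2**min(n, 10) - 1]; give up
    after 16 failed attempts (the 802.3 attempt limit)."""
    if attempts > 16:
        raise RuntimeError("excessive collisions: transmission aborted")
    return random.randrange(2 ** min(attempts, 10))

# The possible wait doubles with each collision, up to a ceiling
assert backoff_slots(1) in (0, 1)
assert 0 <= backoff_slots(16) < 1024
```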
This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is generally measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.
Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later with thin Ethernet the transceiver was integrated into the network adaptor). While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes would work properly while others work slowly because of excessive retries or not at all (see standing wave for an explanation of why); these could be much more painful to diagnose than a complete failure of the segment. Debugging such failures often involved several people crawling around wiggling connectors while others watched the displays of computers running a ping command and shouted out reports as performance changed.
Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.
Monday, September 17, 2007
Building a simple computer network
Practical networks generally consist of more than two interconnected computers and generally require special devices in addition to the Network Interface Controller that each computer needs to be equipped with. Examples of some of these special devices are hubs, switches and routers.
Basic Hardware Components
Repeaters
Because repeaters work with the actual physical signal, and do not attempt to interpret the data being transmitted, they operate on the Physical layer, the first layer of the OSI model.
Bridges
Bridges learn the association of ports and addresses by examining the source addresses of the frames they see on their various ports. Once a frame arrives through a port, its source address is stored, and the bridge assumes that that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which it arrived.
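That learning behavior can be sketched as a small forwarding table (a toy model, not any particular vendor's implementation):

```python
class LearningBridge:
    """Minimal sketch of bridge learning: remember which port each
    source MAC was seen on, flood frames for unknown destinations."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port       # learn the source location
        out = self.table.get(dst_mac)
        if out is None:
            return self.ports - {in_port}   # unknown destination: flood
        if out == in_port:
            return set()                    # same segment: filter
        return {out}                        # known destination: forward

bridge = LearningBridge(ports=[1, 2, 3])
# A (on port 1) sends to unknown B: flooded to ports 2 and 3
assert bridge.handle_frame(1, "A", "B") == {2, 3}
# B replies from port 2: both are now known, forward only to port 1
assert bridge.handle_frame(2, "B", "A") == {1}
```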
Bridges come in three basic types:
- Local bridges: Directly connect local area networks (LANs)
- Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced by routers.
- Wireless bridges: Can be used to join LANs or connect remote stations to LANs
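The learning-and-flooding behavior described above can be sketched as a small forwarding function. The `LearningBridge` class is hypothetical, not a real device's API, but the logic, learn on arrival, forward to a known port, flood unknown destinations, is the standard transparent-bridge algorithm.

```python
# Minimal sketch of a learning (transparent) bridge.
class LearningBridge:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}                # MAC address -> port number

    def handle(self, in_port, src, dst):
        """Return the list of ports the frame is sent out of."""
        self.table[src] = in_port      # learn: src lives on in_port
        if dst in self.table:
            out = self.table[dst]
            # same segment as the sender: filter (forward nowhere)
            return [] if out == in_port else [out]
        # unknown destination: flood everywhere except the arrival port
        return [p for p in range(self.num_ports) if p != in_port]

br = LearningBridge(4)
print(br.handle(0, "A", "B"))   # B unknown -> flooded to ports [1, 2, 3]
print(br.handle(1, "B", "A"))   # A was learned on port 0 -> [0]
```

After the first frame the bridge knows where "A" lives, so the reply goes out exactly one port instead of being flooded.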
Switches
"Switch" is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by the OSI model. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but "multilayer switching" is more a product label than a distinct design concept.
Routers
Routers are networking devices that forward data packets between networks, using headers and forwarding tables to determine the best path for each packet. Routers work at the network layer (layer 3) of the OSI model. They also provide interconnectivity between like and unlike media, which is accomplished by examining the header of a data packet.[8] Routers use routing protocols such as Open Shortest Path First (OSPF) to communicate with each other and establish the best route between any two hosts. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some DSL and cable modems have integrated routers for home consumers.
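The forwarding-table lookup can be illustrated with a longest-prefix match, the rule routers use to pick the most specific matching route. Real routers use optimized structures (tries, TCAM); this sketch uses Python's standard `ipaddress` module, and the table entries and next-hop names are invented for illustration.

```python
# Longest-prefix-match sketch of a router's forwarding decision.
import ipaddress

table = [
    (ipaddress.ip_network("0.0.0.0/0"),   "isp-uplink"),   # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "lan-core"),
    (ipaddress.ip_network("10.1.2.0/24"), "lan-floor2"),
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in table if addr in net]
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.7"))    # most specific /24 -> lan-floor2
print(next_hop("10.9.9.9"))    # covered only by the /8 -> lan-core
print(next_hop("8.8.8.8"))     # falls through to the default route
```

Routing protocols such as OSPF exist precisely to populate a table like this one automatically.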
Types of networks
Personal area network
Personal area networks may be wired with computer buses such as USB and FireWire. A wireless personal area network (WPAN) can also be created with network technologies such as IrDA and Bluetooth.
Local area network
Currently standardized LAN technologies operate at speeds up to 10 Gbit/s. The IEEE has projects investigating the standardization of 100 Gbit/s, and possibly 40 Gbit/s. Inverse multiplexing is commonly used to build a faster aggregate from slower physical streams, such as bringing a 4 Gbit/s aggregate stream into a computer or network element over four 1 Gbit/s interfaces.
Campus area network
This term is most often used for the implementation of networks covering a contiguous area. In the past, when layer 2 switching (i.e., bridging) was cheaper than routing, campuses were good candidates for layer-2 networks, until they grew very large. Today, a campus may use a mixture of routing and bridging. The network elements used, called "campus switches", tend to be optimized to have many Ethernet interfaces rather than an arbitrary mixture of Ethernet and WAN interfaces.
Wide area network
The highest data rate commercially available as a single bitstream on WANs is 40 Gbit/s, principally used between large service providers. Wavelength-division multiplexing, however, can put multiple 10 or 40 Gbit/s streams onto the same optical fiber.
IEEE mobility efforts focus on the data link layer and make assumptions about the media. Mobile IP is a network layer technique, developed by the IETF, which is independent of the media type and can run over different media while still keeping the connection.
Internetwork
An internetwork is the connection of two or more distinct computer networks. In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants of internetwork, depending on who administers them and who participates in them:
- Intranet
- Extranet
- "The" Internet
Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet itself is not considered to be a part of the intranet or extranet, although the Internet may serve as a portal for access to portions of an extranet.
Intranet
An intranet is a set of interconnected networks that uses the Internet Protocol and IP-based tools such as web browsers, and is under the control of a single administrative entity. That administrative entity closes the intranet to the rest of the world, admitting only specific users. Most commonly, an intranet is the internal network of a company or other enterprise.
Extranet
An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other, usually but not necessarily trusted, organizations or entities (e.g., a company's customers may be given access to some part of its intranet, creating an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although an extranet cannot consist of a single LAN: by definition, it must have at least one connection with an outside network.
The Internet
A specific internetwork, consisting of a worldwide interconnection of governmental, academic, public, and private networks, based upon the Advanced Research Projects Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense. It is home to the World Wide Web (WWW) and is referred to as the 'Internet' with a capital 'I' to distinguish it from other generic internetworks.
Participants in the Internet use addresses obtained from address registries that control assignments. Service providers and large enterprises also exchange information on the reachability of their address ranges through the Border Gateway Protocol (BGP).
Classification of computer networks
Computer networks may be classified according to the network layer at which they operate, following basic reference models that are considered industry standards, such as the seven-layer OSI reference model and the four-layer Internet Protocol Suite model. In practice, the great majority of networks use the Internet Protocol (IP) as their network layer. Some networks, however, use IP version 6 (IPv6), usually in coexistence with IPv4; IPv6 use is still often experimental.
Points of failure
A network as simple as two computers linked with a crossover cable has several points at which the network could fail: either network interface, and the cable. Large networks, without careful design, can have many points at which a single failure could disable the network.
When networks are critical, the general rule is that they should have no single point of failure. The broad factors that can bring down networks, according to the Software Engineering Institute [4] at Carnegie Mellon University, are:
- Attacks: these include software attacks by various miscreants (e.g., malicious hackers, computer criminals) as well as physical destruction of facilities.
- Failures: these are in no way deliberate, and range from human error in entering commands, to bugs in network element executable code, to failures of electronic components, and other things that do not involve deliberate human action or system design.
- Accidents: ranging from spilling coffee into a network element to a natural disaster or war that destroys a data center, these are largely unpredictable events. Surviving severe accidents requires physically diverse, redundant facilities. Among the extreme protections against both accidents and attacks are airborne command posts and communications relays[5], which either are continuously in the air or take off on warning. In like manner, systems of communications satellites may have standby spares in space, which can be activated and brought into the constellation.
Dealing with Power Failures
One obvious form of failure is the loss of electrical power. Depending on the criticality and budget of the network, protection from power failures can range from simple filters against excessive voltage spikes, to consumer-grade uninterruptible power supplies (UPS) that can ride out a loss of commercial power for a few minutes, to independent generators with large battery banks. Critical installations may switch from commercial to internal power in the event of a brownout, where the voltage level falls below the normal minimum specified for the system. Systems supplied with three-phase electric power also suffer brownouts if one or more phases are absent, at reduced voltage, or incorrectly phased. Such malfunctions are particularly damaging to electric motors. Some brownouts, called voltage reductions, are made intentionally to prevent a full power outage.
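The conditions above, brownout, normal operation, and voltage spike, amount to a simple threshold check. The thresholds below are assumed example values for a nominal 120 V supply, not from any standard; real equipment specifies its own tolerances.

```python
# Illustrative supply-voltage classifier; thresholds are assumed values
# (roughly +/-10% around a nominal 120 V), not from any specification.
NOMINAL_MIN = 108.0
NOMINAL_MAX = 132.0

def classify(voltage):
    if voltage < NOMINAL_MIN:
        return "brownout"   # e.g., switch to internal power, shed load
    if voltage > NOMINAL_MAX:
        return "spike"      # e.g., surge protector clamps or disconnects
    return "normal"

print(classify(95.0))    # brownout
print(classify(120.0))   # normal
print(classify(180.0))   # spike
```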
Some network elements can protect themselves and shut down gracefully in the event of a loss of power. These might include noncritical application and network management servers, but not true network elements such as routers. A UPS may provide a signal called the "Power-Good" signal. Its purpose is to tell the computer that all is well with the power supply and that it can continue to operate normally. If the Power-Good signal is not present, the computer shuts down. This prevents the computer from attempting to operate on improper voltages and damaging itself.
To help standardize approaches to power management, the Advanced Configuration and Power Interface (ACPI) specification is an open industry standard, first released in December 1996 and developed by HP, Intel, Microsoft, Phoenix, and Toshiba, that defines common interfaces for hardware recognition, motherboard and device configuration, and power management.
By scale
Computer networks may be classified according to the scale: Personal Area Network (PAN), Local Area Network, Campus Area Network, Metropolitan area network (MAN), or Wide area network (WAN). As Ethernet increasingly is the standard interface to networks, these distinctions are more important to the network administrator than the end user. Network administrators may have to tune the network, based on delay that derives from distance, to achieve the desired Quality of Service (QoS).
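The delay that "derives from distance" can be estimated directly: propagation delay is distance divided by signal speed. The figure below assumes signals travel at roughly two-thirds the speed of light, a common approximation for copper and fiber.

```python
# Rough one-way propagation delay, assuming ~2/3 the speed of light
# in the medium (an approximation, not a measured value).
SPEED = 2e8  # meters per second

def one_way_delay_ms(distance_m):
    return distance_m / SPEED * 1000

print(round(one_way_delay_ms(100), 4))        # 100 m LAN link: 0.0005 ms
print(round(one_way_delay_ms(5_000_000), 1))  # 5000 km WAN link: 25.0 ms
```

This is why scale matters for QoS: on a LAN, propagation delay is negligible next to queuing and processing; on a WAN it can dominate, and no amount of bandwidth removes it.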
Controller area networks are a special niche, used for example in control of a vehicle's engine, a boat's electronics, or a set of factory robots.
By connection method
Computer networks may be classified according to the hardware technology that is used to connect the individual devices in the network such as Ethernet, Wireless LAN, HomePNA, or Power line communication.
By functional relationship
Computer networks may be classified according to the functional relationships which exist between the elements of the network, for example Active Networking, Client-server and Peer-to-peer (workgroup) architectures.
By network topology
Main article: Network Topology
Computer networks may be classified according to the network topology upon which the network is based, such as Bus network, Star network, Ring network, Mesh network, Star-bus network, Tree or Hierarchical topology network, etc.
Network topology signifies the way in which intelligent devices in the network see their logical relations to one another. The use of the term "logical" here is significant: network topology is independent of the physical layout of the network. Even if networked computers are physically placed in a linear arrangement, if they are connected via a hub the network has a star topology rather than a bus topology. In this regard, the physical and logical views of a network are distinct.
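The hub example above can be made concrete with an adjacency map, a sketch, not any networking library. The hosts sit "in a row" physically, yet each one's only logical link runs to the hub, which is the defining property of a star.

```python
# Logical topology of physically linear hosts wired through a hub.
physical_row = ["pc1", "pc2", "pc3", "pc4"]   # physical placement order

# logical wiring: each host connects only to the hub, not to neighbors
links = {pc: ["hub"] for pc in physical_row}
links["hub"] = list(physical_row)

# star property: one center node is adjacent to all others
center = max(links, key=lambda n: len(links[n]))
assert center == "hub"
assert all(links[pc] == ["hub"] for pc in physical_row)
```

Had the hosts been chained to each other instead (`pc1 <-> pc2 <-> pc3 <-> pc4`), the same physical placement would yield a bus-like adjacency map, which is the distinction the paragraph draws.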
By protocol
Computer networks may be classified according to the communications protocol that is being used on the network. See the articles on List of network protocol stacks and List of network protocols for more information.
Computer network
Experts in the field of networking debate whether two computers connected by some form of communications medium constitute a network; some works state that a network requires at least three connected computers. One such source, "Telecommunications: Glossary of Telecommunication Terms", states that a computer network is "a network of data processing nodes that are interconnected for the purpose of data communication", with "network" defined in the same document as "an interconnection of three or more communicating entities".[1] A computer connected to a non-computing device (e.g., networked to a printer via an Ethernet link) may also represent a computer network, although this article does not address that configuration.
This article uses the definition which requires only two or more computers to be connected together to form a network.[2] The same basic functions are generally present in this case as with larger numbers of connected computers. For a network to function, it must meet three basic requirements: it must provide connections, communications, and services. Connections refers to the hardware, communications is the way in which the devices talk to each other, and services are the things which are shared with the rest of the network.