By Akash Singh
May 8, 2012 08:50 PM EDT
Thanks to features such as low cost, ease of deployment, increased coverage, and enhanced capacity, multihop wireless networks such as ad hoc networks, mesh networks, and sensor networks, which form the network in a self-organized manner without relying on fixed infrastructure, are touted as the new frontier of wireless networking. Providing efficient quality-of-service (QoS) support is essential for such networks, as they need to deliver real-time services like video, audio, and voice over IP in addition to traditional data service. Various solutions have been proposed to provide soft QoS over multihop wireless networks from different layers in the network protocol stack. However, the layered concept was primarily created for wired networks, and multihop wireless networks resist strict layered design because of their dynamic nature, infrastructureless architecture, and time-varying, unstable links and topology. The concept of cross-layer design is based on an architecture in which different layers can exchange information in order to improve overall network performance. The promising results achieved by cross-layer optimizations have initiated significant research activity in this area. This paper reviews the present state of the cross-layer paradigm for QoS support in multihop wireless networks. Several examples of evolutionary and revolutionary cross-layer approaches are presented in detail. New trends in wireless networking, such as cooperative communication and networking, opportunistic transmission, and real-system cross-layer design for QoS support over multihop wireless networks, are also discussed.
KEYWORDS | Cross-layer design; multihop wireless networks; quality of service (QoS) support
As various wireless networks evolve into the next generation to provide better services, a key technology, the multihop wireless network, has recently emerged. A multihop wireless network is dynamically self-organized and self-configured, with the nodes in the network automatically establishing and maintaining multihop connectivity among themselves. This feature brings many advantages to multihop networks, such as low up-front cost, easy network maintenance, robustness, and reliable service coverage. There are several types of multihop wireless networks designed for different application scenarios. In a wireless ad hoc network, every node acts as a router and forwards packets on behalf of other nodes. Because nodes normally have limited transmission ranges, multihop delivery is necessary for communication among nodes outside each other's transmission range. The topology of an ad hoc network is in general dynamic, because connectivity among the nodes may vary with time due to node mobility, node departures, and new node arrivals. Wireless mesh networks are composed of two types of nodes: mesh routers and mesh clients. In addition to the routing capability for gateway/bridge functions found in a conventional router, a mesh router contains additional routing functions to support mesh networking. Through multihop communications, the same coverage can be achieved by mesh routers with much lower transmission power. The sensor network, currently a very active area of research, is composed of a large number of densely deployed sensor nodes. In general, sensor nodes are limited in battery life, computational capacity, and memory size. The sensor nodes are usually scattered in a sensor field, and each has the capability to collect data and route it back to the sink through a multihop delivery path.
G. Joint Power Control, Scheduling, and Routing
It has recently become evident that a traditional layered network approach, separating routing, scheduling, and power control, is not efficient for providing QoS support in ad hoc networks. It is increasingly recognized that, especially in multihop networks, there is strong coupling among the traditional network, MAC, and physical layers. In the past several years, the problem of coupling routing with access control in ad hoc networks has been addressed, and a joint scheduling and power control algorithm has been studied. Given the observation that a change in power allocation or schedules on one link can induce changes in the capacities of all links in the surrounding area, and in the performance of flows that do not pass over the modified link, joint design among power control, scheduling, and routing is essential. Li et al. assume a time-division multiple-access-based ad hoc network, where all nodes share the bandwidth by occupying different time slots. For the scheduling part, links are assigned slots depending on their link metrics. The algorithm gives priority to links that have larger queue lengths and block less traffic from neighboring links. The authors conclude that with joint power control and scheduling, the network achieves significantly larger throughput and smaller delay. But for
some unbalanced topologies, bandwidth requirements cannot be satisfied by scheduling alone; rerouting is needed to divert some packets through alternative routes and relieve congestion. Routes are then selected periodically according to both energy consumption and traffic accumulation. It can be seen that the rerouting decision is made iteratively together with joint power control and scheduling. A similar idea can be found in work where the authors seek subsets of simultaneously active links, along with the associated transmission powers, that minimize the total average transmission power in the network. A duality approach for finding the optimal scheduling and power control policy is proposed. The authors also consider the problem of routing, and hence the determination of the required data rates on each link, for a given traffic demand rate matrix. Radunovic et al. seek a scheduling, power allocation, and routing that achieves the max-min fair rate allocation. This is a highly complex nonconvex optimization problem for a general network topology. In order to obtain results for larger networks, they focus on one-dimensional symmetric network topologies, where all nodes are aligned on a straight line. These topologies represent a large class of existing networks, from vehicular networks on a highway to networks along a coastline or a mountain valley. The authors found that for small power constraints it is better to relay, while for large power constraints it is better to send data directly to destinations. They characterized the optimal scheduling and power allocation for two types of routing policies, namely direct and minimum-energy routing.
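The queue-driven slot assignment described above can be illustrated with a minimal greedy sketch. This is our own simplification, not the cited algorithm: the function name, the conflict model (links sharing an endpoint cannot share a slot), and the example data are all illustrative assumptions.

```python
# Greedy TDMA slot assignment: links with larger queue backlogs get
# priority, and two links sharing a node cannot use the same slot.
# Links that fit no slot are simply left unscheduled this round.

def assign_slots(links, queue_len, num_slots):
    """links: list of (u, v) pairs; queue_len: dict link -> backlog."""
    schedule = {s: [] for s in range(num_slots)}
    # Serve the most backlogged links first.
    for link in sorted(links, key=lambda l: -queue_len[l]):
        for s in range(num_slots):
            # Conflict model: links that share an endpoint interfere.
            if all(set(link).isdisjoint(set(other)) for other in schedule[s]):
                schedule[s].append(link)
                break
    return schedule

links = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
qlen = {("a", "b"): 5, ("b", "c"): 2, ("c", "d"): 7, ("a", "d"): 1}
print(assign_slots(links, qlen, num_slots=2))
# Disjoint links ('c','d') and ('a','b') share slot 0; the rest use slot 1.
```

A real scheduler would also weigh blocked neighboring traffic, as in the work reviewed above, and iterate with power control and rerouting.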
A. Interaction Among Multiple Layers and the Cross-Layer Design Framework
In the initial stage, multihop wireless network protocol design was largely based on a layered approach, where each layer in the protocol stack is designed and operated independently, with rather static interfaces between layers. This paradigm has greatly simplified network design and led to the robust, scalable protocols of the Internet. However, the inflexibility and suboptimality of this paradigm result in poor performance for multihop wireless networks in general, especially when the application has high bandwidth needs and/or stringent delay constraints. To meet these QoS requirements, recent studies on multihop networks have demonstrated that cross-layer design can significantly improve system performance. Realizing that cross-layer design is important for improving system performance in ad hoc networks, the National Science Foundation and the Office of Naval Research jointly held a workshop on "Cross-Layer Design in Adaptive Ad Hoc Networks" in 2001. A working group of the Internet Engineering Task Force has been studying the interlayer interactions and performance in mobile ad hoc networks. They summarized the interlayer interaction metrics and the benefits of such information exchange among the lower layers, the network layer, and the transport layer. For example, the signal-to-noise ratio from the physical layer and the interference level from the link layer can be used for route selection at the network layer and for transmission control protocol window-size adjustment at the transport layer. Cross-layer design breaks away from traditional network design, where each layer of the protocol stack operates independently. A cross-layer approach seeks to enhance the performance of a system by jointly designing multiple protocol layers. The resulting flexibility helps to provide better QoS support under network dynamics and limited resources. Different system parameters are controlled in distinct layers of a wireless network (see Fig. 1).
For example, power control and modulation adaptation in the physical layer change the overall system topology. Scheduling and channel management in the MAC layer affect space and time reuse in a network. Routing and admission control in the network layer change the flow distribution. Finally, congestion and rate control in the transport layer change the traffic volume on each communication link. All of these controls potentially have mutual impact, so careful attention must be paid when applying controls in different layers. For instance (interaction 1 in Fig. 1), the assignment of channels to certain network interfaces changes the interference between neighboring transmissions; it also defines the network topology, which in turn influences routing. As another example, power control in the physical layer changes the link status and the topology of the network, which in turn affect the scheduling result in the MAC layer; conversely, the scheduling decides the link activations and the interference generated, and therefore changes the power required at each link to achieve a certain QoS requirement (interaction 2 in Fig. 1). It is thus necessary to consider all the controls across different layers jointly to optimize overall performance. Supporting soft QoS over multihop wireless networks can benefit substantially from cross-layer design, in which interdependencies between layers are characterized and exploited by adapting to information exchanged
between layers and building the appropriate amount of robustness into each layer. For example, routing protocols can avoid links experiencing deep fades, or the transport layer can adapt its transmission rate based on the underlying network condition. Fig. 1 illustrates the cross-layer framework and the potential interactions among layers, listed as interactions 0 through 4. In the rest of this section, examples of evolutionary and revolutionary cross-layer approaches from different aspects are reviewed in detail.
B. Cross-Layer Network Capacity Planning
One of the main goals in the design of wireless multihop networks is capacity planning. Network capacity planning is concerned with the cost-effective deployment of a communication infrastructure that provides adequate coverage, throughput, and QoS support for end users. Within this realm, the QoS requirements are represented as a set of end-to-end demands. Multiple network capacity planning schemes have been proposed for the different design goals for which a network can be optimized, e.g., maximizing system throughput, minimizing end-to-end delays, or minimizing total energy consumption. Wu et al. address the network planning problem as allocating physical and MAC-layer resources, or supplies, to minimize a cost function while fulfilling certain transport-layer communication demands. They model the demands in a network as a collection of multicast sessions, while modeling the allocation of supplies as time sharing within a collection of possible physical-layer states. This formulation necessitates an interaction across the network protocol stack, for which the key is to find an appropriate abstraction of each layer. The physical layer can be abstracted as a set of elementary capacity graphs (ECGs). An ECG is a capacity graph that represents a physical-layer state, corresponding to an arrangement of concurrently active links among neighbors. At the MAC layer, by time sharing among different physical states, convex combinations of the ECGs can be achieved, presenting to the upper layers a set of supported composite capacity graphs. The network layer transforms the end-to-end traffic demand into a link-by-link one compatible with a supported capacity graph. Integrating these components, an iterative cross-layer optimization is proposed. Two objectives, minimizing an aggregate congestion measure and minimizing power consumption, are considered in that paper.
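The MAC-layer time-sharing step above amounts to taking a convex combination of the per-state link capacities. The sketch below illustrates this idea under our own illustrative names and example numbers; it is not the cited formulation.

```python
# Composite capacity via time sharing among physical-layer states: the
# capacity of each link is the convex combination of its capacities in
# the elementary capacity graphs (ECGs), weighted by the time share
# spent in each state.

def composite_capacity(ecgs, weights):
    """ecgs: list of dicts link -> capacity; weights: time-share fractions."""
    assert abs(sum(weights) - 1.0) < 1e-9, "time shares must sum to 1"
    composite = {}
    for ecg, w in zip(ecgs, weights):
        for link, cap in ecg.items():
            composite[link] = composite.get(link, 0.0) + w * cap
    return composite

# Two states: in each, a different link is active at full rate.
ecg1 = {("a", "b"): 10.0, ("b", "c"): 0.0}
ecg2 = {("a", "b"): 0.0, ("b", "c"): 10.0}
print(composite_capacity([ecg1, ecg2], [0.3, 0.7]))
# Spending 30% of the time in state 1 yields 3.0 on (a,b) and 7.0 on (b,c).
```

The upper layers then see the set of all such composites as the feasible region against which routing and congestion objectives are optimized.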
To tackle the capacity planning issue in fixed multiradio multichannel multihop wireless networks, Kodialam et al. develop algorithms to jointly optimize routing, channel assignments, and scheduling in order to obtain upper and lower bounds for the capacity region under a given objective function, i.e., QoS requirement . They develop a network model with a limited number of orthogonal channels and with multiple radios at each node. This model provides both necessary and sufficient conditions for a feasible channel assignment and schedule in the network. Both the upper bound and lower bound of the system capacity are given in the paper.
Research work in the field has mainly focused on the routing function for mobile ad hoc networks in dynamic and dense network configurations, assuming that mobile routers connected by wireless links move randomly and organize themselves arbitrarily; the network topology may therefore vary rapidly and unpredictably. This work has been performed mainly in the framework of the IETF Mobile Ad hoc Networks (MANET) working group. The assumptions of the MANET routing protocols have their origins in the requirements imposed by military applications, which were the main focus of DARPA in the early eighties. However, more practical applications nowadays call for the use of multihop wireless networks. In this type of application, the number of hops is limited and mobility is rather low. In addition, even though MANET work has produced results, commercial deployments are still in their infancy. This is attributed to the fact that the MANET routing protocols are rather complex, or overkill, for practical applications found in enterprise, office, or home domains. Currently, four well-known and well-researched routing protocols are being defined and standardized in IETF MANET, namely Dynamic Source Routing (DSR), Ad hoc On-Demand Distance Vector (AODV), Optimized Link State Routing (OLSR), and Topology Dissemination Based on Reverse-Path Forwarding (TBRPF). Recently, efforts have been undertaken to adapt the Open Shortest Path First (OSPF) protocol for wireless links: one proposal puts forward OSPF version 3 as a candidate routing protocol for MANETs, while another suggests improvements to OSPF for use in multihop wireless networks. Furthermore, other routing protocols for MANETs have been proposed by academia. The majority of MANET routing protocols select routes based on the shortest-path metric. Because the overall network throughput is highly dependent on route selection, additional criteria should also be used.
Efforts in this direction have also been made to add criteria based on a combination of shortest path, link quality, or least-congested paths, that is, network load. Link quality should be considered when choosing routes, because wireless links (even between stationary nodes) may exhibit a wide range of delivery rates and asymmetry. Link-quality estimates can be based on either the signal-to-noise ratio or the expected transmission count (ETX) metric. Load-aware routing has also been discussed. Although multihop routing is mainly realized with Layer 3 routing protocols, for small-scale networks Layer 2 bridging can be used as well. Recently, the IEEE 802.1 working group has proposed a new bridging protocol standard that could meet the multihop routing requirements. As opposed to Layer 3 routing (the MANET work), IEEE 802.1w defines the Rapid Spanning Tree Protocol (RSTP) as a remedy for discovering rapid topology changes at Layer 2.
C. Appropriate coupling of protocol layers
Typical protocol stacks adhere to the layering approach of interaction only between adjacent layers. However, because of variations in channel conditions, network topology changes, and energy constraints, multihop wireless networks require more cooperation between the various layers of the protocol stack, even though, strictly speaking, this constitutes a layering violation. In practice, the wireless protocol stack in a multihop ad hoc network layers TCP/IP, routing, and the IEEE 802.11 MAC protocol without any close coupling between them. That is, the network functions at each of these layers have been designed and operate independently of one another, resulting in poor network performance, as also identified elsewhere. Tuning the parameters of these "de facto" standards for ad hoc networks is required to further improve performance. Lately, the "cross-layer protocol design" concept for wireless ad hoc networks has been proposed, which mainly features adaptation and optimization across all layers of the protocol stack. That is, protocols at different layers of the stack (i.e., application, network, MAC, link) adapt their behavior according to the observed channel conditions, changes in network topology, and application requirements. Information related to a protocol's adaptation is exported to protocols at other layers, so that the latter can decide on their further actions (i.e., whether or not to adapt). Similar approaches have been proposed; in particular, one of them focuses on link-layer adaptation according to channel conditions and application requirements for single-hop wireless networks.
Multihop networks may be realized with heterogeneous wireless interfaces. The different features of existing wireless network interfaces with respect to the physical layer, MAC protocol, error control, connection management, etc., should be considered as well. The corresponding wireless network drivers and link-layer protocols should be accessible from the upper-layer protocols and applications for control purposes in a uniform way, similar to the way the upper-layer protocols and applications access the underlying protocol stack through the standard socket interface for data purposes. Thus, to cope with the various existing and emerging radio technologies (e.g., IEEE 802.11b/a/g, Bluetooth) and develop wireless protocol stacks that are radio-agnostic, a common wireless interface (i.e., an Application Programming Interface (API)) is needed to hide the particularities of each technology. The wireless API should provide a common service interface for each underlying link technology while exporting link-related information. Such a common interface should support the necessary service primitives for configuration of wireless drivers and link-layer protocols, retrieval of statistics, and event handling, in order to facilitate the development of wireless-aware applications and routing/mobility management protocols. Moreover, in a heterogeneous multihop wireless environment, different wireless links may be present, and these links may even operate in the same frequency band. For example, Bluetooth and Wi-Fi technologies both operate in the 2.4 GHz license-exempt band. In turn, this leads to interference whenever those technologies operate concurrently. Mitigation techniques should be enforced in the network to alleviate the interference problem.
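The shape of such a radio-agnostic wireless API can be sketched as an abstract service interface covering the three primitive classes named above (driver configuration, statistics retrieval, and event handling). All class and method names here are hypothetical; no standard interface is implied.

```python
# A radio-agnostic wireless control API: one uniform service interface
# per link technology, hiding per-technology particularities while
# exporting link-related information to upper layers.

from abc import ABC, abstractmethod

class WirelessLinkAPI(ABC):
    """Uniform control-plane interface over a wireless driver/link layer."""

    @abstractmethod
    def configure(self, **params): ...          # driver/link configuration

    @abstractmethod
    def get_stats(self): ...                    # link-related information

    @abstractmethod
    def subscribe(self, event, callback): ...   # event handling

class Dot11bLink(WirelessLinkAPI):
    """One concrete technology binding (placeholder values throughout)."""
    def __init__(self):
        self.params, self.handlers = {}, {}

    def configure(self, **params):
        self.params.update(params)

    def get_stats(self):
        return {"rssi_dbm": -70, "tx_errors": 0}

    def subscribe(self, event, callback):
        self.handlers.setdefault(event, []).append(callback)

link = Dot11bLink()
link.configure(channel=6, tx_power_dbm=15)
print(link.params["channel"])  # 6
```

A Bluetooth binding would implement the same three primitives, so routing and mobility management code above the API need not change across radio technologies.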
D. Link layer robustness
Wireless links in a multihop network may exhibit high packet loss rates, high latency, and low throughput. Although these impairments are supposed to be compensated for by the MAC and/or link-layer protocols of each wireless interface, existing wireless technologies support different mechanisms for channel coding, medium access, and error control. If these bearer services are poorly implemented, or implemented without considering IP traffic requirements, then whenever IP traffic is layered over these wireless technologies, the obvious result is performance degradation (throughput decrease and/or latency increase). Thus, complementary machinery is needed to compensate for wireless link impairments. The corresponding network functions complement and augment the services of the underlying wireless link, thus yielding increased protocol performance. Link quality can be substantially improved with link-layer protection, such as error control (e.g., Forward Error Correction (FEC)) and spectral-efficiency mechanisms (e.g., Robust Header Compression (ROHC) for TCP). Link-layer protection is typically applied over the local link, i.e., the one between two adjacent nodes.
D. Signaling Protocol
EWCCP uses explicit signaling to coordinate with neighbor nodes. It periodically broadcasts the average queue information, with the broadcast limited to one-hop neighbors. The broadcast packet contains the average queue size over the last round for all links of a node, including not only the outgoing links but also the incoming links. Note that the queue information of a link is usually available only at the link sender; we make it also available at the link receiver by using the piggyback field in the link congestion header, as shown in Fig. 2. When a data packet is transmitted, the queue information of the link is also delivered to the link receiver, which can then propagate it by broadcasting. Each node maintains a neighbor queue table; one example is shown in Fig. 3. A link is uniquely identified by the IDs of its sender node and receiver node. Each entry is of one of the following three types: 1) O (own), representing a queue on a downstream link; 2) P (piggyback), representing a queue piggybacked from an upstream link; and 3) B (broadcast), representing other information learned from received broadcast packets. The sequence number is used to judge the freshness of an entry. A node can only increase the sequence numbers of entries marked O. There is a tradeoff in the interval between two successive broadcasts: frequent broadcasting gives more timely information but also consumes more bandwidth. A signaling interval on the order of hundreds of milliseconds should be an acceptable overhead. We evaluate this with a simple calculation. Assume that one node has an average of ten
neighbors in one hop and that the signaling interval is 100 ms. Each entry in the queue table may cost 6 bytes (i.e., 2 bytes link id, 2 bytes queue size, 1 byte sequence number, and 1 byte flag). The total bandwidth for signaling is (10 + 1) ∗ (2 ∗ 8 ∗ 10 ∗ 6)/100 = 105.6 kb/s, which is about 1% of the 11-Mb/s
channel of 802.11b, without counting the MAC overhead. Note that the multiplier 2 is applied in the preceding calculation because wireless links may be bidirectional; therefore, both incoming and outgoing queues are broadcast. The signaling protocol is summarized as follows.
1) When a node sends a data packet along a downstream link, it fills the average queue size of that link in the piggyback field in the link congestion header.
2) When a signaling interval has passed, a broadcast packet is generated and sent. The broadcast packet contains all the entries in the table marked O and P.
3) When receiving a broadcast packet or a data packet with a link congestion header, a node updates the neighbor queue table with the contained information. If an entry already exists, the one with the larger sequence number wins.
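The table-update rule in step 3 and the overhead arithmetic from the preceding paragraph can be sketched together. The entry layout and function names are our own illustration of the described protocol, not its implementation.

```python
# Neighbor-queue-table update (rule 3 above): an incoming entry replaces
# an existing one only if it carries a larger sequence number.

def update_entry(table, link, queue_size, seq, flag):
    """table: dict link -> (queue_size, seq, flag); flag is O, P, or B."""
    old = table.get(link)
    if old is None or seq > old[1]:
        table[link] = (queue_size, seq, flag)

table = {}
update_entry(table, ("s", "r"), 4, 10, "B")
update_entry(table, ("s", "r"), 9, 8, "B")   # stale: smaller sequence number
assert table[("s", "r")] == (4, 10, "B")      # the fresher entry survives

# Sanity check of the signaling-overhead estimate from the text:
# 11 broadcasting nodes, 2 * 10 six-byte entries each, every 100 ms.
overhead_kbps = (10 + 1) * (2 * 8 * 10 * 6) / 100
print(overhead_kbps)  # 105.6
```

The check confirms the figure quoted above: roughly 105.6 kb/s, about 1% of an 11-Mb/s 802.11b channel before MAC overhead.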
3.3 Packet Loss Handling
In order to better understand the concepts explained above, we show here a typical response of our mechanism when reacting to lost packets. We include the response of the LDA scheme to highlight the difference between our proposal and LDA. Fig. 4 shows part of a simulation run in which both strategies faced a lost packet in a chain topology of five hops. Let packet(n) be the data packet with sequence number n. Fig. 4a shows that the sender transmits four packets (320–323) at time 12.6 seconds. In this run, packet(322) and packet(323) are dropped. The receiver times out and acknowledges only two packets (320, 321) instead of four. The receiver also reduces its dwin size to two. Upon receipt of the ACK for packet(321), the sender sends two new packets (324, 325), because two packets were acknowledged. At this moment, there are only two packets in flight (324, 325). Since packet(324) and packet(325) are detected by the receiver as out-of-order packets, they trigger immediate acknowledgments at the receiver (the first and second duplicate ACKs for packet(321)). On receiving the first duplicate ACK, the sender transmits a new packet (326), which will also be out of order. When the sender receives the second duplicate ACK, at instant 12.9 seconds, it retransmits the first lost packet (322) and halves its cwnd size to two packets (fast retransmit/fast recovery). The cwnd will be expanded gradually after the sender exits the fast recovery phase. When the sender receives the third duplicate ACK, at time 12.96 seconds, it does nothing because it is in the fast recovery phase. At instant 12.97 seconds, the sender gets the acknowledgment for packet(322), allowing it to retransmit the missing packet(323), and then exits the fast recovery procedure. Packet(323) fills in the gap at the receiver's buffer, which triggers the ACK of
packet(326), due to the cumulative property of the TCP acknowledgment strategy. At instant 13.01 seconds, the sender receives the acknowledgment for packet(326), and so transmits two new packets (327, 328). These two packets cause the receiver to send only one ACK, as its dwin is set to two packets at this point. After that, dwin increases and, as a consequence, the number of delayed ACKs increases toward four. Fig. 4a shows two spurious retransmissions caused by timeouts at the receiver: packet(334) and packet(338) are unnecessarily acknowledged at instants 13.62 s and 13.79 s, respectively. This means that the timeout interval computation may still be improved. Notice that the problem here is not the same as the one addressed in related work, where the spurious retransmissions take place at the sender. Fig. 4b shows the response of LDA to a packet loss. In this simulation run, packet(241) is lost at about 10.55 seconds. Unlike our technique, in which the number of packets in flight is limited to four, LDA works with a large cwnd limit (10 packets), so it has more packets in flight than TCP-DAA. One can notice in Fig. 4b that, although only one packet was dropped, various acknowledgments triggered the transmission of fewer than the optimal four packets at the sender. This shows that the retransmission timer expired unnecessarily in several situations. Additionally, the sender waits for the default three duplicate ACKs before retransmitting the dropped packet, and so takes longer to act. In short, comparing Fig. 4a with Fig. 4b, one can clearly see that TCP-DAA provides more stability in the number of delayed ACKs. As a result, less packet delay variation is perceived by the sender, which in turn tends to minimize the inaccuracy of the timeout interval computation at the sender.
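The receiver-side behavior traced above (shrink dwin to two on an out-of-order arrival, then grow it back toward four) can be captured in a small behavioral sketch. This is our own abstraction of the described dynamics, with assumed names and growth policy, not the authors' implementation.

```python
# Delayed-ACK window (dwin) adaptation at a TCP-DAA-style receiver:
# acknowledge every dwin-th in-order packet, drop dwin to its minimum on
# an out-of-order arrival (loss suspected), and grow it back gradually.

DWIN_MIN, DWIN_MAX = 2, 4

class DelayedAckReceiver:
    def __init__(self):
        self.dwin = DWIN_MAX
        self.expected = 0      # next in-order sequence number
        self.unacked = 0       # in-order packets not yet acknowledged

    def on_packet(self, seq):
        """Return True if this arrival triggers an ACK."""
        if seq != self.expected:
            self.dwin = DWIN_MIN   # loss suspected: ack immediately
            self.unacked = 0
            return True            # duplicate ACK
        self.expected += 1
        self.unacked += 1
        if self.unacked >= self.dwin:
            self.unacked = 0
            self.dwin = min(self.dwin + 1, DWIN_MAX)  # recover gradually
            return True
        return False
```

For example, after an out-of-order packet, this receiver acknowledges every second in-order packet until dwin has grown back to four, matching the post-recovery behavior described above.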
A. Tuning of the Back-off Parameters
In IEEE 802.11 DCF, the back-off parameters such as CWmin and CWmax are fixed, which is insufficient to guarantee satisfactory performance under various network scenarios. To analyze the impact of the back-off parameters on network performance, Bianchi derived a two-dimensional Markov chain model for the exponential back-off process. Using this model, it was shown that the number of stations and the minimum CW size have significant impact on the overall performance of IEEE 802.11 DCF. Bianchi and Tinnirello showed how to estimate the number of active stations in a WLAN using an extended Kalman filter, and demonstrated that tuning the MAC parameters can effectively improve network performance when the number of active stations is properly estimated. Accordingly, extensive studies on improving network capacity by adapting back-off parameters have been carried out. Cali et al. proposed a distributed algorithm called IEEE 802.11+, which enables each node to estimate the number of contending nodes at any given time. They also derived an analytical model that gives a theoretical maximum bound on the network capacity, and tried to find the optimal CW value that achieves the theoretical throughput limit. Kwon et al. proposed the fast collision recovery (FCR) protocol, a contention-based protocol that redistributes the back-off timer among all competing stations with the objective of reducing idle back-off time. Most existing CW tuning schemes assume a one-hop network
topology, such as an infrastructure WLAN, and primarily consider how to adjust the contention window size of each node to maximize the number of concurrent transmissions without incurring severe collisions among them. In contrast, we consider a multi-hop network topology, where intra-flow interference more significantly affects the end-to-end throughput performance. Our proposed CW adaptation scheme attempts to reduce unnecessary packet drops due to intra-flow interference and to improve the end-to-end throughput performance by differentiating the contention window sizes of relay nodes belonging to the same multi-hop path.
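To make the idea of differentiated contention windows concrete, one plausible policy is to give relays closer to the destination a smaller (higher-priority) CWmin so that queued packets drain forward rather than piling up. The halving rule below is a hypothetical illustration only, not the scheme proposed above.

```python
# Hypothetical CWmin differentiation along a multi-hop path: halve the
# contention window at each successive hop, with a floor to avoid
# excessive collisions. All constants are illustrative assumptions.

def cwmin_for_relay(hop_index, base=31, floor=7):
    """hop_index: 0 at the source, increasing toward the destination."""
    return max(floor, base >> hop_index)

print([cwmin_for_relay(i) for i in range(4)])  # [31, 15, 7, 7]
```

Under such a policy, a downstream relay tends to win contention against its own upstream node, reducing the chance that packets already carried several hops are dropped by intra-flow interference.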
B. Transmit Power Control
The issue of power control has been extensively studied in the context of topology maintenance, where the objective is to preserve graph-theoretic network connectivity, reduce power consumption, and mitigate MAC-level interference. Power control for the purpose of increasing spatial reuse and network capacity has been treated in the PCMA, PCDC, and POWMAC protocols. Monks et al. proposed PCMA, in which the receiver announces on an out-of-band channel the interference margin it can tolerate, and the transmitter selects a transmit power that does not affect any ongoing transmissions. Muqattash and Krunz proposed PCDC and POWMAC, respectively.
C. Carrier Sense Threshold Adjustment
The carrier sense threshold is also a key parameter in determining the level of spatial reuse, and its impact on network capacity has been studied extensively. Zhu et al. determined an optimal carrier sense threshold that maximizes spatial reuse for several regular topologies. Based on the SINR required to sustain a predetermined transmission rate, Zhu et al. also proposed a dynamic algorithm that adjusts the carrier sense threshold so as to set the SINR of each transmission to a given level. Vasan et al. proposed an algorithm called ECHOS to dynamically adjust the carrier sense threshold in order to allow more flows to coexist in 802.11-based hotspot wireless networks. Yang and Vaidya considered several factors, such as MAC overhead, transmission rate, and network density, in selecting the optimal carrier sense threshold that maximizes aggregate throughput.
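The dynamic schemes surveyed above share a common control loop: steer the carrier sense threshold so that each transmission's SINR sits at a target level. The sketch below is a generic illustration of that loop; the step size, bounds, and function name are our own assumptions, not any cited algorithm.

```python
# Target-SINR carrier-sense adaptation: when measured SINR exceeds the
# target, the node can tolerate more concurrent transmitters and raises
# its threshold (more spatial reuse); otherwise it lowers the threshold
# and defers to more distant transmitters.

def adapt_cs_threshold(cs_dbm, sinr_db, target_db, step=1.0,
                       lo=-95.0, hi=-60.0):
    if sinr_db > target_db:
        cs_dbm += step   # headroom available: allow more reuse
    elif sinr_db < target_db:
        cs_dbm -= step   # protect the ongoing transmission
    return min(hi, max(lo, cs_dbm))

cs = -82.0
for sinr in [25.0, 24.0, 12.0, 12.0]:   # measured SINR, target 15 dB
    cs = adapt_cs_threshold(cs, sinr, target_db=15.0)
print(cs)  # -82.0: two raises and two cuts cancel out
```

In practice the target itself would come from the SINR needed to sustain the chosen transmission rate, coupling this knob to rate selection as noted above.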
IV. THE ENHANCED SYN-MAC PROTOCOL AND SIMULATION
The SYN-MAC protocol was originally designed for a local area network within a single collision domain, and we have observed inefficiency when it is applied in multihop networks. In this section, we first propose an enhanced version of SYN-MAC to optimize the network throughput and channel efficiency; then a performance comparison between the SYN-MAC protocol and IEEE 802.11 is provided via simulations. Fig. 6. Transmission probability of each flow in an arbitrary network (model vs. simulation).
A. The Enhanced SYN-MAC Protocol
As described in the earlier sections, a station gives up channel access once it receives a CS carrying another station's ID. For example, in Fig. 2, Station 12 gives up its transmission after it receives the CS from Station 9. This is, however, not necessary, because the two transmissions 12 → 11 and 9 → 13 can take place simultaneously. In IEEE 802.11, such simultaneous transmission is prevented by the RTS/CTS scheme, because the ACK of one transmission (e.g., 11 → 12) and the data of another (e.g., 9 → 13) may collide (e.g., at Station 12 while it is receiving the ACK). These problems do not exist in SYN-MAC, because the transmissions of data and ACKs are all synchronized; as a result, the two transmissions can be accommodated concurrently. To exploit this property, we propose a few modifications to SYN-MAC that incur no extra overhead and achieve better channel efficiency and system throughput. The rules in the contention interval are modified as follows: 1) for any intended sender in the network, say As, if As hears a valid CS with the ID of another station that it recognizes as a neighbor, As gives up for this frame as previously stated; however, 2) if the ID in the CS is not in D(As), As does not give up channel competition and keeps transmitting its CSs until either it hears the ID of one of its neighbors in another CS or the contention interval ends. Table II summarizes the new activities in the contention interval. With the modified rules, more stations are allowed to transmit in the same frame. For example, under the original rules, Station 2 had to give up transmission at the second slot of the contention interval; now it learns from the CS that the receiver is not within its collision domain, and therefore keeps competing at the third slot. Stations 8 and 12 do not give up either, for the same reason.
In the hidden-station elimination interval, Station 8 will not receive a valid HCM because Station 10 has a bigger contention number. However, Stations 2 and 12 can receive HCMs from their intended receivers. Thus five transmissions (2 → 4, 3 → 5, 9 → 13, 10 → 7, and 12 → 11) can take place simultaneously under the enhanced approach, in contrast to only three transmissions shown in Fig. 2.
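The modified contention decision can be sketched as a small predicate. This is our own illustration, with hypothetical names; following the example in the text, we assume the ID carried in a CS identifies a station whose membership in the sender's collision domain D(As) decides the outcome.

```python
def keep_competing(neighbors, heard_cs_ids):
    """Enhanced SYN-MAC contention rule (sketch).

    neighbors: set of station IDs in the sender's collision domain D(As).
    heard_cs_ids: IDs carried in the contention signals heard so far.
    Returns True if the sender should keep transmitting its CSs.
    """
    for cs_id in heard_cs_ids:
        if cs_id in neighbors:
            # The winning transmission involves one of our neighbors, so
            # transmitting could cause a collision: give up (rule 1).
            return False
    # Every heard CS names a station outside D(As): transmitting in
    # parallel cannot corrupt those receptions, so keep competing (rule 2).
    return True
```

Under the original SYN-MAC rules, the loop would return False for any heard CS; the enhancement is the membership test against D(As), which is what lets Stations 2, 8, and 12 stay in contention in the example above.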
7.2 Random Topologies
In the next set of experiments, we evaluate the performance of the OPDMAC protocol in a large multihop network. We compare OPDMAC with the basic DMAC protocol, the DMAC protocol with omni-directional backoff (DMAC-OM-BO), the CDR-MAC protocol, and the IEEE 802.11 standard. In a random network the challenges are more complex, but the additional transmission opportunities can provide more spatial reuse gain. We simulated a network with 30 nodes randomly placed in a 1,000 m × 1,000 m area. The results are averaged over 10 different simulation runs. We evaluate the performance for both one-hop flows and multihop flows.
7.2.1 One-Hop Flows
In each simulation run, 10 of the 30 nodes are randomly chosen as sources. Each source generates constant bit rate (CBR) traffic, and the destination of each packet is chosen randomly from the set of the node's one-hop neighbors. We consider the aggregate throughput, the average delay, and the packet delivery ratio as our performance metrics. Fig. 9 shows the aggregate throughput as the total offered load increases. We can see that OPDMAC outperforms all other protocols due to its ability to exploit the available spatial reuse. The results also show that DMAC-OM-BO outperforms DMAC, since it alleviates deafness chains and deadlocks. CDR-MAC achieves the lowest throughput, as the large control overhead associated with the protocol significantly offsets the benefits of spatial reuse. Fig. 10 illustrates the average delay in the same network. The average delay of OPDMAC is on the order of milliseconds even at very high loads, whereas the other protocols experience delays on the order of seconds at high loads. Although the delay experienced by IEEE 802.11 is necessary to resolve contention, the other directional MAC protocols fail to fully exploit the benefits of beamforming antennas. The significant improvement achieved by OPDMAC is mainly because it is very effective in exploiting the transmission opportunities that arise when multiple flows at a node are ready for transmission in different directions. The proposed scheme spares the node unnecessary idle wait time and minimizes queuing delay by transmitting a packet in one direction during the backoff period that precedes a transmission in another direction. At the other extreme, CDR-MAC suffers from very high delay due to the time consumed in transmitting several RTS packets before each data transmission. In Fig. 11, we plot the packet delivery ratio (PDR) versus the total offered load.
We can see that CDR-MAC performs the best, since it is a conservative protocol designed to avoid collisions and deafness. DMAC-OM-BO performs similarly to IEEE 802.11 because of its prolonged omni-directional backoff periods. At low loads, DMAC starts to suffer from a low PDR due to the successive failures resulting from deafness, while OPDMAC achieves a higher PDR since it minimizes the correlation between successive retransmission attempts. At high loads, the PDR of OPDMAC decreases dramatically. This is mainly due to the IEEE 802.11 default parameters and timing used with OPDMAC; as discussed in Section 6, those values are not suitable for beamforming antennas. In contrast to the conservative CDR-MAC, OPDMAC is an aggressive protocol that aims to exploit the spatial reusability of the wireless channel; hence, its implementation parameters should be chosen carefully to deal with the possibility of transmission failures, especially at high traffic loads. In the next experiment, we evaluate the performance of OPDMAC after changing the default values for DIFS, the CTS timeout, and the backoff CW to those discussed in Section 6. The new and the standard values are shown in Table 2. We plot the aggregate throughput, the average delay, and the packet delivery ratio in Figs. 12, 13, and 14, respectively. As we can see, the aggregate throughput for OPDMAC obtained with the new values is almost identical to that obtained using the standard ones. However, the new parameters improve the packet delivery ratio: at high offered loads, the PDR increases from 86.9 percent to 90.1 percent. This is mainly due to the reduction in the number of hidden terminals when a longer DIFS is observed before transmission. As shown in Fig. 13, the gain in the PDR comes at the expense of a slight increase in delay, caused by the higher DIFS value.
However, the average delay is still far below that achieved by the other protocols. In the previous experiments, we used an LP window similar to the contention window of IEEE 802.11b, namely [0, 31]. In this experiment, we evaluate the performance of OPDMAC as the LP window changes, while keeping the new values for DIFS, the CTS timeout, and the backoff CW shown in Table 2. Figs. 15, 16, and 17 show the aggregate throughput, the average delay, and the packet delivery ratio, respectively. Increasing the LP window reduces transmission failures due to deafness, since the nodes are likely to spend more time in omni-directional mode listening to the medium. Fig. 17 depicts the benefits of a larger LP window on the packet delivery ratio. However, the throughput curves in Fig. 15 show that the largest LP window, [0, 127], decreases the aggregate throughput by 3 percent. This is mainly due to the increase in idle waiting time associated with a large LP window, which can reduce the spatial reuse gain. With respect to the average delay, Fig. 16 shows that the moderate LP window [16, 63] achieves the same delay as a smaller LP window at very high loads. Thus, the LP window [16, 63] strikes a balance between the probability of deafness and unnecessary idle waiting time in the considered scenarios.
2. OVERVIEW OF CROSS-LAYER INTERACTION
2.1 Signal Interference and 802.11 MAC
In a wireless medium, the signal propagates in all directions and may suffer from interference due to other nodes' transmissions. In a multihop topology, the IEEE 802.11 MAC cannot completely prevent signal interference, a problem also known as the extended hidden terminal problem. The classic hidden terminal problem, shown in Fig. 1(a), occurs when the transmission range of each node covers just a single-hop distance, making nodes A and C "hidden" from each other. However, since node B can see both of them, it can coordinate the transmissions of the hidden nodes A and C using RTS/CTS to avoid concurrent transmission. The extended hidden terminal problem, shown in Fig. 1(b), occurs when some nodes lie beyond node B's transmission range. Since node D is beyond node B's control, node D can interfere with the RTS/CTS exchange among nodes A, B, and C. In Fig. 1(b), the interference range spans two hops: the signal is not strong enough to deliver meaningful data, yet strong enough to corrupt data in another node's transmission range. Consequently, node D keeps interfering with node B's RTS reception whenever node D is busy (e.g., sending data to node E or C). As node B does not respond with a CTS, node A keeps retransmitting the RTS until it reaches the maximum retry count (7 in IEEE 802.11) and eventually drops the packet. In short, nodes A and D become new hidden terminals to each other.
Fig. 2. The multi-loop cross-layer interaction model.
The extended hidden/exposed terminal problem significantly reduces the bandwidth-delay product of multihop IEEE 802.11 networks. The bandwidth-delay product represents the total number of unacknowledged in-flight packets between the TCP sender and receiver, including all packets in the queue and under transmission on the link. The wireless link loss caused by signal interference starts before the queue overflows, and it triggers a premature TCP reaction that reduces the sending rate.
Because of the limited queue buildup, the bandwidth-delay product of multihop IEEE 802.11 networks is mostly determined by the number of packets in transmission on the wireless links. As shown in Fig. 1(b), one packet every four hops survives the interference, so the bandwidth-delay product of the network is about one quarter of the end-to-end hop count.
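The one-packet-per-four-hops observation translates into a quick back-of-the-envelope estimate. The helper below is our own illustration of that rule of thumb, not a formula from the text beyond the quarter-of-the-hop-count relation.

```python
def bdp_packets(hop_count, reuse_spacing=4):
    """Rough bandwidth-delay product (in packets) of a multihop 802.11
    chain, assuming one in-flight packet per `reuse_spacing` hops: the
    interference range spans two hops, so concurrent transmissions must
    be spaced about four hops apart to survive.
    """
    return max(1, hop_count // reuse_spacing)
```

For example, a 16-hop chain can sustain only about 4 unacknowledged packets in flight, which explains why a TCP window larger than that mostly feeds MAC contention rather than throughput.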
2.2 Multi-loop Cross-layer Interaction Model
On-demand routing protocols such as DSR and AODV establish a path on demand and monitor the path throughout the session. If the path is believed to have failed (typically due to wireless link loss), the protocol performs a maintenance/re-discovery operation on the path. In multihop 802.11 networks, however, congestion produces link loss rather than queuing loss, due to the extended hidden/exposed terminal problem; this interferes with the route maintenance task of the routing protocol. The whole process can be explained by the multi-loop protocol interaction model shown in Fig. 2. First, the TCP window mechanism overloads the network and intensifies MAC contention (steps A and B). The packet loss at the link layer is perceived as a routing failure by the on-demand routing protocol (step C). Mistaking the loss for a routing failure, the routing agent enters a recovery process by sending error messages, updating and reestablishing the routing table, and salvaging some lost packets (step D). The recovery process creates network traffic at the critical point of congestion and link failure (step E). While the network overload remains unresolved, MAC contention loss occurs again (step B). Meanwhile, due to the routing failure, the TCP connection is interrupted (step F) and then times out (step G). Since no external packets enter the network during this period, the network overload subsides and the routing and MAC functions recover. After the timeout, however, TCP restarts (step H) and overloads the network again (step A). Fundamentally, Fig. 2 reflects the lack of coordination in sharing resources between the transport and on-demand routing protocols. TCP, by nature, tries to grab as much of the available network bandwidth as possible.
Under the contention-based 802.11 MAC, TCP's greedy behavior leaves little bandwidth that might otherwise serve other critical functions at different layers (e.g., route maintenance here). The MAC and on-demand routing protocols thus lack network resources at the critical moment when they need them most. TCP's greedy behavior leads to severe instability of the whole network, which in turn hurts TCP performance. In Fig. 2, it is only between steps H and A (i.e., before the network is overloaded) that TCP is given a quality end-to-end connection. The inner loop (formed by steps B, C, D, and E) is self-sustaining in the sense that the routing recovery attempt induced by a link failure results in another link failure. For a network with a long end-to-end hop distance, there can be multiple points of link failure, and the full path may not be recovered easily until the network load falls well below capacity. While the inner loop repeats, the network remains unstable unless a proper action is taken elsewhere (e.g., at the end hosts) to reduce the network load.
6.2 Identification of Incipient Congestion
Due to its end-to-end semantics, TCP's congestion control algorithm is based on measurements of round-trip times and packet losses. In fact, in current TCP variants such as Reno and NewReno, congestion is identified solely through the observation of packet losses. TCP therefore increases the load injected into the network until a packet loss is detected, and that loss is taken as the sign of congestion. Although TCP Vegas uses RTT measurements in addition to packet losses to identify congestion, it still suffers from bursty packet transmissions in wireless multihop networks. Considering the characteristics of IEEE 802.11 multihop networks, it becomes obvious that a transport protocol which actually provokes packet drops to obtain network feedback must suffer from poor performance. Thus, our goal is to develop a congestion control algorithm that identifies high contention on the network path of a TCP connection and proactively throttles the transmission rate before losses occur. In order to retain the end-to-end semantics of TCP, such a congestion control algorithm requires a measure obtainable at the TCP entities that quantifies the degree of contention on the network path. We propose the coefficient of variation of recently measured round-trip times, covRTT, as the key measure of the degree of contention on the network path.
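The covRTT measure can be computed online from a sliding window of RTT samples as the ratio of the standard deviation to the mean. The sketch below is a minimal illustration; the class name and window length are our choices, not from the text.

```python
from collections import deque
from statistics import mean, pstdev

class CovRTT:
    """Sliding-window coefficient of variation of RTT samples: the
    contention measure covRTT = stddev(RTT) / mean(RTT) over the most
    recent samples (window size is our assumption)."""

    def __init__(self, window=50):
        self.samples = deque(maxlen=window)

    def add(self, rtt):
        """Record one RTT measurement (seconds)."""
        self.samples.append(rtt)

    def value(self):
        """Return covRTT for the current window; 0.0 until there are
        at least two samples."""
        if len(self.samples) < 2:
            return 0.0
        m = mean(self.samples)
        return pstdev(self.samples) / m if m > 0 else 0.0
```

A sender would update this on every ACK and throttle its transmission rate once the value crosses a contention threshold, reacting before losses occur; steady RTTs yield a covRTT near zero, while the bursty, highly variable RTTs typical of contended multihop paths drive it up.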