Wednesday, October 29, 2008

Active Network Vision and Reality: Lessons from a Capsule-Based System

coming soon

Resilient Overlay Networks

This paper presents the concept of a Resilient Overlay Network (RON), which provides reliable packet delivery by reducing the time to detect and recover from network failures and outages. The authors discuss the scalability advantages of the Internet's hierarchical organization and then describe its disadvantages for reliable packet delivery. Inter-domain routing with BGP damps routing updates to minimize oscillations; this distortion of link-state information inevitably leads to suboptimal fault tolerance. RON nodes are deployed in various domains and continually probe the paths between one another, forwarding each packet either over the direct Internet path or through another RON node, whichever currently performs better.
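
To make the forwarding idea concrete, here is a minimal sketch of RON-style path selection (the node names, probe values and scoring function are my own invention, not the authors' implementation): score the direct path and every single-intermediate alternative from recent probe data, then forward along the winner.

```python
# Measured metrics per (src, dst) pair: (latency_ms, loss_rate) - toy values
probes = {
    ("A", "B"): (180.0, 0.30),   # direct path is lossy
    ("A", "C"): (40.0, 0.01),
    ("C", "B"): (45.0, 0.02),
}

def path_score(hops):
    """Lower is better: total latency, inflated by expected loss."""
    latency = sum(probes[h][0] for h in hops)
    delivery = 1.0
    for h in hops:
        delivery *= 1.0 - probes[h][1]
    return latency / delivery

def best_path(src, dst, nodes):
    candidates = [[(src, dst)]]                 # the direct Internet path
    candidates += [[(src, v), (v, dst)]         # at most one RON hop
                   for v in nodes if v not in (src, dst)
                   and (src, v) in probes and (v, dst) in probes]
    return min(candidates, key=path_score)

print(best_path("A", "B", ["A", "B", "C"]))  # -> [('A', 'C'), ('C', 'B')]
```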

These application-layer overlay networks are a neat concept and somewhat reminiscent of middleboxes - an intermediary structure that bypasses certain shortcomings of the Internet infrastructure. One important point the authors make is that forwarding via at most one intermediate RON node provides sufficient reliability gains. Even so, I'm not sure (a) how the modern Internet would react to a heavy influx of RON nodes (rather than the small set deployed in this paper), or (b) whether inter-domain routing latency and distortion are still a big enough problem today to warrant the use of RONs. Interesting paper, I must say.

A First-Principles Approach To Understanding the Internet's Router-level Topology

This paper presents a slightly different perspective on modeling the topology of the Internet. The authors first point out flaws in the power-law approach - essentially, several different network models can give rise to similar degree distributions, so matching a degree distribution says little about whether a model is right. Instead, the authors adopt a "first-principles" approach with an emphasis on practical network engineering considerations: one must identify which factors are fundamental to topology construction rather than simply fitting power laws to measured data. In particular, router technology constraints and economic considerations must be taken into account.
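
The degree-distribution criticism is easy to reproduce. Below is a small illustration of my own (not from the paper; it needs the networkx library): a preferential-attachment graph and a randomly wired graph with the identical degree sequence can still differ in how their hubs interconnect. One crude way to quantify that, loosely inspired by this line of work, is the score s(g) = sum of deg(u)·deg(v) over all edges.

```python
import networkx as nx

def s(g):
    """Hub-interconnection score: sum of deg(u)*deg(v) over all edges."""
    deg = dict(g.degree())
    return sum(deg[u] * deg[v] for u, v in g.edges())

ba = nx.barabasi_albert_graph(1000, 2, seed=42)     # preferential attachment
degs = [d for _, d in ba.degree()]
cm = nx.configuration_model(degs, seed=42)          # random wiring, same degrees

print(sorted(degs) == sorted(d for _, d in cm.degree()))  # True: same degrees
print(s(ba), s(cm))   # yet measurably different structure
```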

Using probabilistic methods to generate graphs that model network behavior is a flawed approach, since random rewiring between nodes can produce topologies that match the degree statistics yet perform poorly as engineered networks. Interestingly, the authors acknowledge that they have used "toy models" rather than actual network data to illustrate the new modeling approach. Even though they address this criticism to a degree, the absence of real network data still costs the work some credibility. Unlike the power-laws paper, this one spends more time discussing the flaws of other approaches than on concrete modeling techniques backed by network-based evaluation. Valid points are brought up, but the paper feels a bit incomplete.

On Power-Law Relationships of the Internet Topology

This paper addresses the need for accurate models of the Internet topology, both for simulation and for analyzing and optimizing protocols. Topology can be studied at the intra-domain or inter-domain level using metrics such as node out-degree and inter-node distance. Power laws had previously been used to describe network traffic but not network topology, where probabilistic distance-based models were the norm. The authors aim to identify useful relationships between graph parameters, choosing parameters that can each be captured in a single number: for example, the rank exponent, the out-degree exponent and the hop-plot exponent.
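
As a sanity check of what "captured in a single number" means, here is a quick sketch (my construction, not the authors' code) of estimating the rank exponent R from the law d_v ∝ r_v^R: sort degrees in decreasing order and least-squares fit log(degree) against log(rank); the slope is the exponent. A synthetic networkx graph stands in for the AS-level data the paper actually uses.

```python
import math
import networkx as nx

g = nx.barabasi_albert_graph(5000, 3, seed=1)   # stand-in graph, not AS data
degs = sorted((d for _, d in g.degree()), reverse=True)

xs = [math.log(rank) for rank in range(1, len(degs) + 1)]
ys = [math.log(d) for d in degs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print("rank exponent R ~ %.2f" % slope)   # negative: degree falls with rank
```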

One naturally wonders how such strict relationships emerge in a chaotic network of both cooperative and antagonistic forces like the modern Internet. The authors anticipate and respond to this quandary with a discussion of the dynamic equilibrium reached in the disorder of the Internet. Several arguments are made in favor of further exploring such power laws, such as the ubiquity of self-similarity and chaos. I really enjoyed this read and agree that this could be very important for network research going forward. The emergence of these power laws is reminiscent of Gaussian and Poisson distributions describing various environments in communication and biology - we all know how those mathematical models have guided research in their disciplines.

Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks

This paper describes the directed diffusion approach to coordinated sensing and data dissemination in wireless sensor networks. The design of this new paradigm is driven by robustness, scalability and energy-efficiency requirements. Data-centric communication in this scheme involves the dissemination of "interests" (named task descriptions), followed by data propagation along the resulting gradients and local aggregation. The authors stress that energy-efficiency requirements dictate the need for short-range hop-by-hop communication and local computation. Directed diffusion has a few features reminiscent of the TAG service discussed in the previous paper, such as in-network data aggregation and caching.
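
To ground the terminology, here is a toy sketch of the interest/gradient bookkeeping. The attribute names loosely follow the paper's "four-legged animal" example; the rest is my simplification, not the authors' implementation.

```python
import time

# An interest names data by attribute-value pairs, not by node address.
interest = {
    "type": "four-legged animal",    # task: detect this event type
    "interval": 1.0,                 # exploratory (low) event rate, in s
    "rect": (0, 0, 100, 100),        # geographic region of interest
    "expiresAt": time.time() + 600,
}

def matches(event, interest):
    """Does a locally sensed event satisfy this interest?"""
    x, y = event["location"]
    x1, y1, x2, y2 = interest["rect"]
    return (event["type"] == interest["type"]
            and x1 <= x <= x2 and y1 <= y <= y2)

# Each node keeps a gradient (requested rate) toward every neighbor an
# interest arrived from; the sink later "reinforces" one neighbor by
# resending the interest with a smaller interval, drawing data down the
# best path at a higher rate.
gradients = {}   # neighbor -> requested interval

def on_interest(neighbor, interest):
    gradients[neighbor] = interest["interval"]

on_interest("node7", interest)
print(matches({"type": "four-legged animal", "location": (42, 17)}, interest))
```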

The evaluation suggests that novel features such as reinforcement-based best path adaptation do indeed provide substantial energy savings, especially given the task-specific nature of modern sensor networks. One interesting point of discussion would be the potential feasibility of the local approach adopted here for a general IP-based network. I found this paper interesting but it is perhaps not as essential in the syllabus as the TAG paper.

TAG: a Tiny AGgregation Service for Ad-Hoc Sensor Networks

This paper presents TAG, a distributed aggregation scheme for wireless sensor networks. Many applications of ad-hoc mote networks require the execution of declarative queries to extract meaningful information from collected data. TAG provides an efficient alternative to centralized data processing, reducing communication requirements and hence bandwidth consumption. Essentially, sensor nodes are organized into a routing tree rooted at a base station, which serves as the data processing interface to the end user. The tree structure exploits the broadcast nature of radio transmission by maintaining level information at each node - even if a sensor node loses communication with its parent, it can select a new parent by overhearing messages from nodes one level closer to the root. After applying the query locally and aggregating results from its children, each node routes data back to the root via its parent.
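
A minimal sketch of the aggregation step (my own toy in Python, not the authors' mote code): TAG expresses an aggregate through an initializer, a merging function and an evaluator over partial state records, so for a query roughly like SELECT AVG(temp) FROM sensors ... EPOCH DURATION 30s, AVG travels up the tree as a (sum, count) pair rather than as raw readings.

```python
def init(reading):                 # leaf: turn a reading into a record
    return (reading, 1)

def merge(a, b):                   # any node: combine two partial records
    return (a[0] + b[0], a[1] + b[1])

def evaluate(rec):                 # root: turn the record into the answer
    s, c = rec
    return s / c

# A node merges its own reading with its children's records and sends a
# single record to its parent; only the root computes the final average.
children = [(36.1, 1), (72.4, 2)]  # records received from two subtrees
mine = init(35.8)
for rec in children:
    mine = merge(mine, rec)
print(evaluate(mine))              # average over the whole subtree
```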

One appeal of the TAG service is the use of simple SQL-style queries. The concept is also somewhat reminiscent of a distributed Map-Reduce implementation of a full inverted index, where a query can be efficiently processed by distributing the work across many mappers, each dealing with a small subset of the text base. The primary concern that came to mind while reading this paper was the effect of disconnected nodes and lost messages; the authors answer this quite thoroughly with a caching solution. This paper addresses many issues inherent in sensor networks and certainly belongs in the syllabus.

Saturday, October 11, 2008

XORs in The Air: Practical Wireless Network Coding

In line with the recent trend of papers on throughput optimization over wireless networks, this paper applies network coding to that end in the COPE scheme. The initial example of two receivers exchanging messages via a router by XORing them into a single broadcast provides an intriguing preview for the reader. Importantly, the authors show that the network coding approach can integrate seamlessly with existing TCP and UDP protocols - unlike the ExOR mechanism of the previous paper, this one is a more immediately practical solution. That said, the throughput gains are still hindered by the presence of TCP, mainly because congestion control keeps router queues small and thus limits coding opportunities. In general, COPE only XORs together packets that are headed to different next hops, and it prefers to code packets of similar lengths, padding the shorter ones.
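
The XOR trick itself fits in a few lines. A bare-bones sketch (toy packet contents and framing, not COPE's real header format): the relay XORs two packets bound for different next hops into one broadcast, and each receiver recovers its packet by XORing with the native packet it already knows.

```python
def xor(a: bytes, b: bytes) -> bytes:
    # pad the shorter packet so the XOR lengths match
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

p_alice = b"hello from alice"   # Alice -> relay -> Bob
p_bob = b"hi from bob"          # Bob   -> relay -> Alice

coded = xor(p_alice, p_bob)     # the relay sends ONE broadcast, not two

# Bob already knows p_bob (he sent it), so he can decode Alice's packet,
# and vice versa:
assert xor(coded, p_bob).rstrip(b"\x00") == p_alice
assert xor(coded, p_alice) == p_bob.ljust(len(coded), b"\x00")
print("both packets recovered from a single coded transmission")
```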

The results show that COPE provides significant throughput gains under TCP and especially under UDP, which lets router queues grow large and creates more coding opportunities. Moreover, coding gains are greater when uplink and downlink traffic is similar. This paper was quite interesting because it integrated information theory and networking in a way I had not previously been exposed to. The last two papers have probably been among the most exciting thus far, in my opinion.

ExOR: Opportunistic Multi-Hop Routing for Wireless Networks

This paper presents ExOR, an integrated network/link-layer protocol for routing large unicast transfers over multi-hop wireless networks. Traditional routing fixes the path in advance, so a single lossy link can force repeated retransmissions over that link. ExOR instead defers the choice of each packet's next hop until after the transmission, once it is known which candidate forwarders actually received the packet. By deferring this choice, each transmission has multiple opportunities to make progress and is not restricted to a fixed path of loss-vulnerable points of failure. In essence, this dynamic approach enables transmission (without unnecessary retransmissions) over long radio links with high loss rates and provides substantial increases in throughput.
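
A toy of the deferred choice (schematic only; the real protocol operates on batches of packets with "batch maps"): the sender broadcasts with a priority-ordered forwarder list, sorted by ETX to the destination, and among the nodes that actually heard the packet, the highest-priority one forwards while the rest suppress.

```python
forwarder_list = ["C", "B", "A"]          # best ETX to the destination first

def choose_forwarder(received_by):
    for node in forwarder_list:           # highest priority first
        if node in received_by:
            return node                   # lower-priority nodes stay quiet
    return None                           # nobody got it: sender retries

print(choose_forwarder({"A", "B"}))       # lossy link to C -> "B" forwards
print(choose_forwarder({"A", "B", "C"}))  # best case       -> "C" forwards
```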

For me, the discussion of interference between potential forwarding nodes was inadequate, and I wasn't quite convinced by some of the assumptions that were made. The evaluation results were quite conclusive, but I would've preferred a better explanation of why the additional inter-node communication this scheme requires is not a major obstacle to its adoption. The use of ETX as the path metric for determining forwarder priority was very appropriate given the papers we've read recently. Finally, I've been wondering about such dynamic routing mechanisms ever since I started learning about networking, and why they are so rarely mentioned - in that sense, this paper was a very satisfying read. I'd definitely keep it in the syllabus. One last thing - the acronym ExOR (Extremely Opportunistic Routing) is pretty neat. This paper is also a good example of when the end-to-end principle can be compromised.

Tuesday, October 7, 2008

A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols

This paper compares the performance of various routing protocols for "ad hoc" multi-hop wireless networks, which require no network infrastructure or centralized administration - such as the rooftop network discussed in a previous paper. Specifically, the DSDV, TORA, DSR and AODV protocols are evaluated in the ns-2 network simulator. The authors provide a very useful introduction to the potential applications of ad hoc networks and briefly discuss the physical- and link-layer models they added to ns-2. Destination-Sequenced Distance Vector (DSDV) ensures loop freedom by tagging routes with destination sequence numbers, preferring fresher sequence numbers and then better metrics. The Temporally-Ordered Routing Algorithm (TORA) takes a very different approach, discovering routes on demand and placing less emphasis on shortest-path routing. Dynamic Source Routing (DSR) abandons hop-by-hop routing altogether: the source discovers and caches complete routes and places them in each packet. This protocol does not require periodic route advertisements, but one wonders how well it scales to larger networks. Finally, Ad Hoc On-demand Distance Vector (AODV) combines several features of the DSDV and DSR protocols.
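
Of the four, DSDV's sequence-number rule is the easiest to capture in a few lines. A sketch of my own, simplified from the protocol description: an advertisement is accepted only if it carries a newer destination sequence number, or the same sequence number with a better metric.

```python
table = {}   # dest -> (seqno, metric, next_hop)

def on_advertisement(dest, seqno, metric, via):
    cur = table.get(dest)
    if cur is None or seqno > cur[0] or (seqno == cur[0] and metric < cur[1]):
        table[dest] = (seqno, metric, via)

on_advertisement("D", seqno=100, metric=3, via="B")
on_advertisement("D", seqno=100, metric=5, via="C")   # ignored: worse metric
on_advertisement("D", seqno=102, metric=5, via="C")   # accepted: newer seqno
print(table["D"])   # now via C, even though the metric got worse
```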

The simulation results indicate that while all protocols perform well in the absence of node mobility (large pause time), DSR and AODV come out on top in the presence of mobility. In terms of overhead, only the periodic DSDV routing protocol maintains a nearly constant overhead with respect to mobility rate. I really enjoyed this paper because it provided an overview of various approaches to multi-hop routing and built upon the previous few papers. The discussion was fairly straightforward as well.

A High-Throughput Path Metric for Multi-Hop Wireless Routing

This paper presents the Expected Transmission Count (ETX) metric, discussing its advantages and describing its design and implementation in the DSDV and DSR routing protocols. The authors begin by cataloguing the various shortcomings of the minimum hop-count metric. Unlike minimum hop-count, the ETX metric accounts for (potentially asymmetric) link loss ratios and interference between the successive links of a path in order to find the route with the highest throughput. The ETX metric could make a network like the multi-hop rooftop network (discussed in the previous paper) practical to deploy.
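
The metric itself is compact enough to state directly: with forward and reverse delivery ratios df and dr measured from broadcast probes, a link's ETX is 1/(df·dr), and a route's ETX is the sum over its links. A quick sketch (the example numbers are mine):

```python
def link_etx(df: float, dr: float) -> float:
    """Expected transmissions on one link, counting the reverse-path ACK."""
    return 1.0 / (df * dr)

def path_etx(links):
    return sum(link_etx(df, dr) for df, dr in links)

# A two-hop route over good links can beat one bad "minimum-hop" link:
one_lossy_hop = [(0.5, 0.5)]                 # ETX = 4.0
two_clean_hops = [(0.95, 0.95), (0.9, 0.9)]  # ETX ~ 1.11 + 1.23 = 2.34
print(path_etx(one_lossy_hop), path_etx(two_clean_hops))
```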

After describing how ETX plugs into the DSDV and DSR routing protocols, the authors evaluate its performance with respect to link loss ratios and interference. One thing that confused me is why Figures 2 and 6 both display crossing lines - how can the DSDV hop-count route achieve higher throughput over that short period than the best static route? Overall, this paper was interesting, even if several points were rather obvious; the analysis of the effect of link loss ratios was especially thorough. This probably belongs in the syllabus.

Wednesday, October 1, 2008

Architecture and Evaluation of an Unplanned 802.11b Mesh Network

This paper discusses the design of an unplanned urban rooftop 802.11b mesh network. Unconstrained node placement, omni-directional antennas, self-configuring software and other robustness-oriented features make the network easily deployable. Each rooftop node is hosted by a participating volunteer at a random, unplanned location. The impact of this approach shows in the fact that the network has grown to 37 nodes in just a year with minimal administrative effort.

The evaluation of this rooftop network makes clear that the unplanned, multi-hop mesh provides increased connectivity and throughput. Given how little global coordination the network requires, it appears to be a very significant breakthrough. The throughput-optimizing routing protocol was especially interesting - beyond standard link-state flooding and Dijkstra's algorithm, could the best route be chosen with current link usage taken into account? Unlike many of the papers in the syllabus, this one focused on architecture rather than protocol and was an appropriate change-up.
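
For concreteness, here is roughly what that route computation looks like (hypothetical topology and costs; the network's actual protocol and metric details differ): Dijkstra over ETX-style link costs, which is also the natural place one could imagine folding in current link usage as an extra cost term.

```python
import heapq

links = {  # node -> [(neighbor, etx_cost), ...] - made-up topology
    "gw": [("a", 1.2), ("b", 2.8)],
    "a":  [("gw", 1.2), ("b", 1.1), ("c", 1.5)],
    "b":  [("gw", 2.8), ("a", 1.1), ("c", 1.3)],
    "c":  [("a", 1.5), ("b", 1.3)],
}

def best_route(src, dst):
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in links[node]:
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return None

print(best_route("c", "gw"))   # -> (~2.7, ['c', 'a', 'gw'])
```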

Modeling Wireless Links for Transport Protocols

This paper discusses the role of wireless link modeling in the design of transport protocols. Due to non-congestion packet losses and the latency of corrective link-layer mechanisms, wireless links can have a drastic effect on TCP transport performance. The authors argue that a wireless link model should be simple but realistic - as long as the model reproduces the net effect of a mechanism, it need not represent the mechanism itself. They describe the topology and traffic characteristics of cellular, WLAN and satellite wireless links. Transport protocols are evaluated using "goodput" - the portion of delivered data that is useful - alongside the standard performance metrics of throughput, delay, fairness and dynamics. Having introduced the work, the authors present models for corruption, delay variation, reordering, channel allocation, bandwidth variation, asymmetry, buffering and handovers.
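
As a toy example of that modeling philosophy (all numbers invented, and much cruder than the paper's models): simulate only the net effect of corruption losses and occasional delay spikes, then read off goodput next to raw throughput.

```python
import random

random.seed(0)
LOSS = 0.02          # corruption loss rate, independent of congestion
BASE_DELAY = 0.15    # seconds per transmission (e.g., a cellular link)
SPIKE = 1.0          # occasional link-layer recovery / handover stall

sent = 0
clock = 0.0
for pkt in range(1000):            # deliver 1000 unique packets
    while True:
        sent += 1
        clock += BASE_DELAY + (SPIKE if random.random() < 0.01 else 0.0)
        if random.random() >= LOSS:
            break                  # delivered
        # otherwise corrupted: the link layer resends

PKT_BYTES = 1500
throughput = sent * PKT_BYTES / clock   # everything transmitted
goodput = 1000 * PKT_BYTES / clock      # only useful (unique) data
print(f"throughput {throughput:.0f} B/s, goodput {goodput:.0f} B/s")
```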

Finally, the authors comment on the interplay between transport protocols and wireless technologies: characteristics that are not fundamental to transport protocols (or to wireless technologies) should be modified to allow smooth interaction with wireless technologies (or with transport protocols). Having read the previous paper on TCP performance over wireless links, this one was very appropriate and gave a more balanced view of how transport-layer protocols and link-layer technologies should meet each other halfway. One thing I'm still confused about is the following statement: "Furthermore, because cellular and satellite links have high latency, connections traversing them can be penalized by the bias of TCP congestion control toward flows with high RTT (Round-Trip Time)". Assuming that high latency implies high RTT, why would a congestion control mechanism biased towards connections with high RTT "penalize" such connections?
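
One way to square this: the standard TCP throughput approximation (Mathis et al.) gives rate ~ MSS / (RTT · sqrt(2p/3)), so at equal loss a high-RTT flow gets proportionally less bandwidth - the "bias toward flows with high RTT" reads as a bias at the expense of such flows. A quick check with made-up but plausible RTTs:

```python
from math import sqrt

def tcp_rate(mss_bytes, rtt_s, loss):
    """Mathis et al. steady-state TCP throughput approximation, in B/s."""
    return mss_bytes / (rtt_s * sqrt(2 * loss / 3))

for name, rtt in [("WLAN", 0.02), ("cellular", 0.3), ("satellite", 0.6)]:
    rate = tcp_rate(1460, rtt, 0.01)   # same 1% loss for every flow
    print(f"{name:9s} RTT {rtt:4.2f}s -> {rate / 1e3:7.1f} kB/s")
```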