COMET Group Theses

The following is the list of PhD theses of graduate students who were affiliated with the COMET Group and who graduated in or after 1990.

Permission to copy and distribute these files is granted provided that the names of the authors and their affiliations are included in every copy.

Josep M. Ferrandiz (1990)
Subrata Mazumdar (1990)
John-Thones Amenyo (1991)
Jay Martin Hyman (1993)
Yannis A. Korilis (1995)
Paul Chang (1995)
Dimitrios E. Pendarakis (1995)
Nikos G. Aneroussis (1995)
Predrag R. Jelenkovic (1996)
Mun Choon Chan (1997)
Nemo Semret (1999)
Jean-François Huard (1999)
Cristina Aurrecoechea (1999)
Xin Wang (2001)
Jonathan Rosenberg (2001)
Ping Pan (2002)
Mahesan Nandikesan (2002)
Javier Gomez-Castellanos (2002)
Raymond R.-F. Liao (2002)
Michael E. Kounavis (2003)
Andras Veres (2004)
Petar Momcilovic (2004)
Ana Radovanovic (2005)
Chieh-Yih Wan (2005)
Daniel Villela (2005)
Bong-Jun Ko (2006)
Rita Wouhaybi (2006)
Xiaowei Zhang (2006)
Seoung-Bum Lee (2006)
Boonsit (Teddy) Yimwadsana (2007)
Jing Feng (2007)
Daejoong Kim (2007)
Andreas Constantinides (2007)
Raj Kumar Rajendran (2008)
Yee Him (Patrick) Cheung (2008)
Kyung Joon Kwak (2008)
Shane Eisenman (2008)
Xiaozhu Kang (2008)
Gahng-Seop Ahn (2008)
Jian Tan (2009)


Josep M. Ferrandiz: Point Processes in Modeling, Analysis and Control of Integrated Networks

Thesis not available for downloading

Palm-martingale calculus for point processes is used to analyze problems that arise when modeling integrated networks.

We first apply the Palm-martingale calculus to Markov chains. We study a class of geometric decomposition algorithms for the computation of equilibrium probabilities and show that they provide an exact solution if and only if rate conservation between building blocks holds. The approach is applied to analyze Quasi-Birth-and-Death processes with state-dependent parameters. We introduce a class of partial observation schemes for Markov chains based on time changes. The methodology leads to a probabilistic interpretation of the process of inverting a Q-matrix. Examples including the computation of trunk utilization and the study of reliability monitors are also given.

The application of Palm-martingale calculus to non-Markovian environments involves the modeling of a node that supports real-time traffic as a G/GI/m/B queue. The performance of a real-time session is measured by the packet loss probability and the average packet gap. The latter captures the packet correlation in a real-time session. We obtain closed forms for the average gap and the packet loss probability. If the packet loss is Markovian, the packet gap has a geometric distribution.

We study node access for real-time traffic. Node admission will be granted if the quality of service is guaranteed for the newly arriving session and the sessions already in progress. We model the new session and the node load by a Markov modulated point process and show that there exists a switching surface such that if the node load is below it, then the new session will receive an acceptable quality of service. Detailed results for the M/D/1/B queue are given.
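
The M/D/1/B setting can also be explored numerically. The following is a minimal Monte Carlo sketch (ours, not the thesis's analysis; all parameter values are illustrative) that estimates the packet loss probability of an M/D/1/B node; sweeping the arrival rate then traces out the load below which a target loss constraint is met.

import random
from collections import deque

def md1b_loss(lam, service, capacity, n_arrivals=200_000, seed=1):
    """Monte Carlo estimate of the packet loss probability of an M/D/1/B queue.

    lam      -- Poisson arrival rate
    service  -- deterministic service time D
    capacity -- total packets the node can hold (in service plus waiting)
    """
    rng = random.Random(seed)
    departures = deque()          # departure times of packets currently in the node
    t, lost = 0.0, 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)             # next Poisson arrival
        while departures and departures[0] <= t:
            departures.popleft()              # purge packets that have left
        if len(departures) >= capacity:
            lost += 1                         # buffer full: packet dropped
        else:
            start = departures[-1] if departures else t
            departures.append(max(start, t) + service)
    return lost / n_arrivals

# Example: offered load 0.9, node capacity of 10 packets
print(md1b_loss(lam=0.9, service=1.0, capacity=10))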

N-processes are defined via time changes and multiclass N-processes are analyzed. We present a new method of analysis of queues based on a new rate conservation principle for distribution densities. We apply the methodology to study multiclass N/GI/1 queues. For the N/GI/1/∞ queue, we solve Takacs' equation. For the multiclass N/GI/1/B queue, we derive the blocking loss and the average packet gap.


Subrata Mazumdar: Knowledge-Based Monitoring of Integrated Networks for Performance Management

Readme file for the following series of PS files
Abstract and Acknowledgement
Thesis Index
1. Introduction
....Figures
2. A System Architecture for Knowledge-Based Monitoring
....Figures
3. Modeling the Environment and the Interface for Monitoring
....Figures
4. Knowledge Representation and the Organization of the Knowledge Base for Monitoring
....Figures
5. Objective-Driven Monitoring
....Figures
6. A Knowledge-Based Monitoring Environment
....Figures
7. Conclusions and Future Research Issues

In this thesis the problem of monitoring of integrated networks is addressed and a knowledge-based approach is presented as a solution to this problem.

The knowledge-based monitoring system is modeled as a real-time system in which the monitor asynchronously interacts with the network through an interface. The interface is specified by a set of state variables to which sensors are attached for monitoring. The architecture of the monitor consists of a knowledge database and an inference engine for reasoning.

The semantic information about network objects and the interface is represented by the Entity-Relationship model. A computational model, consisting of state variables and a set of sample path and performance evaluation operators, is defined to describe the various processes that are associated with the state variables.

The information about the network, the monitor and the interface is represented and structured in the knowledge database based on the principles of knowledge representation. The knowledge database is organized according to the different time scales of the information into the following databases: configuration, sensor, dynamic and statistical. The configuration database is a one-to-one representation of the hardware and software components of the integrated network at an ``object'' level of abstraction. The sensor database is set up based on the sensors installed throughout the network, as well as generic descriptions of variable-specific and object-specific sensors. The dynamic database represents the data space generated by the state and event variables in the network. The statistical database is derived from the dynamic database through a set of performance evaluation operators and represents the data space generated by the performance parameters.

An approach for sensor configuration, installation and activation for real-time monitoring is presented. An algorithm for objective-driven monitoring is introduced that selectively activates a set of sensors based on the general description of the quality of service parameters or the control objectives for resource management.

An environment for knowledge-based monitoring of integrated networks is described. The environment consists of a knowledge database with an inference engine and a three-node simulator of the integrated network testbed MAGNET II. The environment is used as a vehicle to validate the flexibility of the proposed knowledge-based monitoring system.


John-Thones Amenyo: Real-Time Distributed Scheduling and Buffer Management for Congestion Control in Broadband Networks

Abstract
Contents
Chapters 1-4
Chapters 5-6, References

This thesis introduces and studies a new class of distributed algorithms called proactive controls for congestion control of broadband networks. A large-scale broadband network has a high bandwidth-delay product; hence, reactive controls are impractical for broadband networks. However, it is also insufficient to rely exclusively on preventive controls for congestion control in broadband networks. Despite network-edge traffic policing, it is still possible for aggregated and focussed loads from well-behaved sources to cause network overload.

The proactive control scheme exploits the highly correlated nature of the complex traffic streams that are expected to be carried by broadband integrated networks. At each network node, a proactive control scheme does traffic prediction on the incoming streams. The horizon of traffic prediction is chosen to be sufficiently high so as to obviate the problem of large propagation delay in broadband networks. The predictions are then used to determine future congestion at a node. Whenever congestion is anticipated, backward congestion notification messages are sent from the network node to its upstream neighbors. Then these neighbors adapt their outgoing flows, for example by means of selective cell discarding within quality of service constraints, to help ameliorate the congestion.
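
As a rough illustration of this control loop (a sketch under our own assumptions, not the thesis's algorithm), a node can extrapolate its occupancy over the prediction horizon from a short-term arrival-rate estimate and notify its upstream neighbors when the extrapolation crosses a threshold:

def proactive_check(queue_len, recent_arrivals, service_rate, horizon, buffer_size):
    """Sketch of a per-node proactive congestion test (all parameters hypothetical).

    Predicts the queue occupancy `horizon` time units ahead from a short-term
    arrival-rate estimate; if the prediction exceeds a fraction of the buffer,
    a backward congestion notification should be sent upstream.
    """
    est_rate = sum(recent_arrivals) / len(recent_arrivals)   # arrivals per unit time
    predicted = queue_len + (est_rate - service_rate) * horizon
    return predicted > 0.8 * buffer_size                     # 0.8: illustrative threshold

if proactive_check(queue_len=40, recent_arrivals=[12, 15, 14], service_rate=10,
                   horizon=5, buffer_size=100):
    print("anticipated congestion: notify upstream neighbors")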

The thesis studies cooperative distributed scheduling algorithms based on proactive controls for broadband networks designed according to the Asynchronous Time Sharing network architecture concept. It is shown that the cooperative scheme gives better performance than non-cooperative distributed schemes with respect to schedulable regions. A rationale is also advanced for the effectiveness of the proactive control scheme, based on the concepts of stress balancing and traffic statistics modification.

The thesis also studies adaptive, dynamic buffer allocation to traffic classes at network nodes. The buffer management scheme studied was chosen to meet real-time constraints imposed on the buffer manager's reaction to buffer overflows.

Finally, the thesis presents numerical techniques to compute quality of service performance measures in cyclic service multiqueue models with finite buffers and limited service disciplines. A technique is provided to decompose cyclic service multiqueue models into single queues with server vacations.


Jay Martin Hyman: Real-Time Scheduling and Admission Control in Broadband Networks

Abstract
Contents
1. Introduction
2. Scheduling
3. Admission Control
4. VC and VP Assignment Strategies
5. Conclusions and References

The broadband networks currently under development will form the backbone of an integrated services packet network, over which traffic of many different types will be multiplexed together. Given that each service may be characterized by different cell arrival statistics and have a different definition of ``quality of service'' (QOS), what control structures can guarantee the requisite quality of service to each class while making efficient use of network resources? This dissertation addresses this problem at two levels of the network control stack, scheduling and admission control. Scheduling mediates the contention between cells of different classes at each individual multiplexer, and can be used to help guarantee cell-level QOS requirements, which may include bounds on cell loss rates, cell delays, and the like. Admission control serves to protect the scheduler from overloads that would cause violations of cell-level QOS. In addition, it mediates the contention for resources between calls of different classes, in response to their respective QOS requirements at the call level.

Our treatment of scheduling algorithms leads us to the concept of the schedulable region, which defines the amount of traffic that a particular scheduler can successfully carry, given the traffic characteristics and QOS requirements. The schedulable region is shown to be the fundamental element by which the scheduler and admission controller can be separated into a layered architecture. It allows the admission control problem to be formulated with no further reference to cell-level performance issues. The admissible load region is then shown to quantify the joint performance of a particular choice of scheduling and admission control policies operating within this framework.
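
A hedged sketch of this layered separation in code: the admission controller consults only the schedulable region, with no reference to cell-level behavior. Here the region is approximated by linear constraints A n <= b, which is purely an illustrative representation, not the thesis's.

import numpy as np

def admissible(load, new_class, constraints):
    """Layered admission test: admit a class-k call only if the resulting
    call vector stays inside the schedulable region (here: A n <= b)."""
    A, b = constraints
    candidate = load.copy()
    candidate[new_class] += 1
    return bool(np.all(A @ candidate <= b))

# Two call classes, two linear boundary facets (illustrative numbers)
A = np.array([[1.0, 4.0],     # class-2 calls consume ~4x the resources of class 1
              [1.0, 1.0]])
b = np.array([100.0, 60.0])
load = np.array([50.0, 5.0])
print(admissible(load, new_class=1, constraints=(A, b)))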

The extension of this approach to a network environment may involve the decentralization of the admission control function. We define the resources associated with a virtual path by a contract region that specifies how many calls it may carry of each class. Through the development of a calculus of regions, we ensure that each network link will always operate within its schedulable region, even under this decentralized admission control scheme. Finally, we show how this concept applies to arbitrary networks supporting virtual private networks.


Yannis A. Korilis: Architecting Noncooperative Networks

Abstract, Contents
1. Introduction
2. Existence of Equilibria in Noncooperative Optimal Flow Control
3. Capacity Allocation under Noncooperative Routing
4. Achieving Network Optima Using Stackelberg Routing Strategies
5. Conclusions, Future Research and References

Control decisions in large-scale networks are often made by each user independently according to its individual performance objectives. Such networks are called noncooperative, and game theory provides the systematic framework to study their behavior. The operating points of a noncooperative network are the Nash equilibria of the control game. The goal of this dissertation is twofold: (i) to characterize the structure of the network operating points, and (ii) to introduce a unified methodology for architecting noncooperative networks so that their operating points exhibit certain desirable properties.

Network control games are typically non-classical constrained games, due to the presence of stability and/or quality of service constraints. Using flow control as a paradigm, existence of equilibria is investigated for a class of control games where the constraints couple the strategy spaces of the users in a way that does not allow the application of standard equilibrium results from the theory of games. A more general methodology to study equilibrium existence, based on the best reply correspondence of the control game, is introduced. The best reply correspondence of the flow control game is shown to be a function that, under appropriate conditions, has a fixed point that is a Nash equilibrium of the flow control game.

Noncooperative equilibria are inherently inefficient and lead, in general, to suboptimal network performance. Employing a new approach to network design and management, two strategies are devised for architecting the operating points of a noncooperative network, so that its overall performance is improved. In the provisioning phase, the network designer architects the resource configuration of the network by implementing a capacity configuration that leads to systemwide efficient routing equilibria. The optimal capacity configuration is specified explicitly for a class of network topologies. In the run-time phase, a manager controls part of the network flow. The manager is cognizant of the noncooperative behavior of the users and architects the flow configuration of the network by driving them to an equilibrium point that improves the overall network performance. For a class of network topologies, necessary and sufficient conditions for the manager to enforce an equilibrium that coincides with the global network optimum are derived. The strategy of the manager that implements that optimum is shown to be unique and is specified explicitly.


Paul Chang: A Connection-Oriented Work-Conserving Packet Scheduling Architecture for BISDN

Thesis not available for downloading

The Broadband Integrated Services Digital Networks (broadband-ISDNs) that are being deployed offer many distinct advantages over conventional circuit-switched or time-division multiplexed networks. The advantages include uniform network access, economies of scale, and improved bandwidth utilization due to statistical multiplexing gain. On the other hand, the introduction of broadband-ISDN also presents many new technical challenges in the design of network control and management functions, especially in terms of providing quality of service (QoS) guarantees to individual network connections. Since many consider it beyond current technology to provide QoS guarantees to hundreds of thousands of network connections, many packet scheduling designs discussed in the literature have focused on providing QoS guarantees on the basis of traffic classes. Unfortunately, many questions remain unanswered under this paradigm.

From the network users' point of view, it does not matter how networks categorize their connections, as long as the networks can deliver QoS guarantees sufficient for their individual connections. Therefore, we feel the ultimate goal of broadband-ISDN design is to provide QoS guarantees to network users on a per-connection basis. As a first step toward this goal, we need to explore alternate network switching system hardware architectures in order to overcome the design complexity of providing connection-oriented QoS guarantees in broadband-ISDNs. The alternate hardware architectures should allow for the design flexibility of applying the most advanced micro-electronic technologies to exploit concurrent data flow structures inside the network switching systems, in order to support the high data throughput required for high-speed transmission links in the broadband-ISDN.

In this dissertation, we study the fundamental packet scheduling structure underlying the recently proposed QoS-driven connection-oriented packet scheduling disciplines, including Virtual Clock, Fair Queueing, Earliest Due Date, and Generalized Processor Sharing. We conclude that although the connection-oriented work-conserving packet scheduling disciplines may exhibit different packet scheduling characteristics, they share a common packet scheduling framework based on a sorted priority queue structure. However, the infeasibility of the sorted priority queue design has prevented the proposed packet scheduling disciplines from being implemented in broadband-ISDNs. To overcome the design complexity of the sorted priority queue structure and, at the same time, provide a flexible and expandable hardware platform for packet scheduling disciplines, we propose a new, yet feasible, packet scheduling architecture as part of the alternate broadband-ISDN switching system hardware architectures. The proposed packet scheduling architecture can handle hundreds of thousands of packets and network connections at the same time and still exhibit almost identical packet scheduling characteristics as a work-conserving packet scheduler. We therefore name the proposed packet scheduling architecture the Virtual Work-Conserving (VWC) packet scheduling architecture.
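
For concreteness, the shared framework identified above amounts to stamping each arriving packet with a per-connection timestamp and always serving the smallest stamp. A minimal heap-based sketch, using the Virtual Clock stamping rule as one example (class and variable names are ours, not the thesis's):

import heapq

class SortedPriorityScheduler:
    """Minimal sorted-priority-queue scheduler of the kind the thesis identifies
    as common to Virtual Clock, Fair Queueing, EDD and GPS emulations."""
    def __init__(self):
        self.heap = []                 # (timestamp, seqno, conn_id, packet)
        self.vclock = {}               # per-connection virtual clock
        self.seq = 0

    def enqueue(self, conn_id, packet, length, rate, now):
        # Virtual Clock rule: stamp = max(VC, real time) + length / reserved rate
        vc = max(self.vclock.get(conn_id, 0.0), now) + length / rate
        self.vclock[conn_id] = vc
        heapq.heappush(self.heap, (vc, self.seq, conn_id, packet))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None                # work-conserving: serve whenever backlogged
        _, _, conn_id, packet = heapq.heappop(self.heap)
        return conn_id, packet

The VWC architecture of the thesis replaces this sorted structure with a memory-based hardware design; the sketch only shows the baseline behavior it approximates.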

To obtain an intuitive understanding of the fundamental packet scheduling characteristics, we perform two packet waiting time analyses with a Poisson packet arrival process on the VWC packet scheduling architecture. To gain a further understanding of the overall system-level behaviors, we conduct extensive packet scheduling performance simulation studies with different packet scheduling disciplines and simulated traffic models on the VWC packet scheduling architecture. From these analyses and simulation studies, we establish an operating range for the VWC packet scheduling architecture. To investigate the design complexity of packet scheduling operations on high-speed transmission links, we transform the VWC packet scheduling architecture into a unique hardware design, which explores the high density device characteristics of CMOS memory technologies. To illustrate the feasibility of the VWC packet scheduling architecture, we implement the VWC packet scheduling hardware design using off-the-shelf CMOS components and asynchronous SRAM modules. The hardware implementation, when deployed into ATM networks, can sustain a link throughput of 3.53 Gbps with a 15-ns SRAM module and 0.8-um ASIC technology, or 1.06 Gbps with a 35-ns SRAM and 0.6-um FPGA technology. To further illustrate the scalability of the VWC packet scheduling architecture, we expand the hardware implementation of the VWC packet scheduling design using different values of several key design parameters in the VWC packet scheduling architecture. From the expanded hardware implementations, we derive the relationship between each key design parameter and the hardware-complexity/critical-timing of the VWC packet scheduling architecture. Combining these results with our research on network switching system hardware structure designs, we demonstrate that, with the VWC packet scheduling architecture, the QoS-driven connection-oriented packet scheduling disciplines can be deployed and implemented in broadband-ISDN switching systems.


Dimitrios E. Pendarakis: On the Tradeoff Between Transport and Signaling in Broadband Networks

Abstract, Table of Contents
1. Introduction
2. Real-Time Video Traffic Modeling and Analysis
3. Virtual Path Bandwidth Allocation in Multiuser Networks
4. An Adaptive Virtual Path Allocation Policy for Broadband Networks
5. Conclusions and Directions for Future Research

In broadband Asynchronous Transfer Mode (ATM) networks, processing costs will increase due to the introduction of new services with a wide range of traffic profiles and quality of service requirements, and the demand for interoperability among different networks and protocols. At the same time, transport costs per unit of information will decrease because of the use of high-speed optical transmission systems. These facts emphasize the importance of the fundamental tradeoff between the capacity of the transport plane and the capacity of the signaling plane in broadband networks. Investigation of this tradeoff is the central topic of this dissertation. Two main topics are addressed.

First, we develop an understanding of the traffic that these networks will carry, in particular real-time video, and identify the traffic parameters that impact the ability of the transport plane to provide quality of service to this traffic. We propose a new model for VBR video traffic that accurately captures the behavior on three different time scales: scene, frame and slice. Using both this model and actual video data, we conduct experiments which show that for real-time scheduling the impact of long-term or inter-frame correlation is negligible, while the impact of short-term or intra-frame correlation is significant.

The second part of this thesis presents algorithms for Virtual Path (VP) bandwidth allocation, with the objective of achieving an optimum balance between processing and transport costs. Two different aspects of the problem are examined, leading to two classes of algorithms. The problem of VP capacity allocation in multi-user networks shared by noncooperative users is studied first. We assume that each user sets up virtual paths that optimize its own, selfish, performance measure. This measure accounts both for the guaranteed call-level quality of service and for the cost incurred in reserving the resource. The interaction among the user strategies is formalized as a noncooperative game. We show that this game has a unique Nash equilibrium and that it possesses a certain fairness property. We next examine how controls should be distributed between the users and the manager of the network, and what classes of distributed algorithms can be used to allocate bandwidth to VPs. Convergence to the unique Nash equilibrium point is shown for both a Gauss-Seidel and a Jacobi scheme.

We next present a class of algorithms which operate on a faster time scale, at the expense of using more localized information. Based on a threshold scheme the size of a VP is dynamically adjusted to reflect changes in the VP occupancy. In each virtual path controller the thresholds are chosen so as to keep bandwidth utilization high, while obtaining a low rate of processing requests. Two novel ideas are used in our threshold scheme: adaptivity, which results in a better prediction of future bandwidth requirements; and hysteresis, which prevents excessive processing of requests due to oscillations around thresholds. This class of algorithms encompasses both the pure VC and the fixed VP bandwidth policies as special cases. Our results show that our scheme significantly improves upon previously suggested approaches.
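
A minimal sketch of the threshold mechanism with hysteresis (threshold values and step size are illustrative; the adaptive prediction of future bandwidth requirements described above is omitted):

def adjust_vp_capacity(capacity, occupancy, step, low=0.5, high=0.9):
    """Threshold-based VP resizing with hysteresis (illustrative thresholds).

    The VP is grown when occupancy crosses the high-water mark and shrunk
    only when it falls below the low-water mark, so oscillations around a
    single threshold do not trigger a signaling request on every change.
    Returns (new_capacity, signaled): signaled is True when a (costly)
    capacity-change request must be processed by the network.
    """
    utilization = occupancy / capacity
    if utilization > high:
        return capacity + step, True
    if utilization < low and capacity > step:
        return capacity - step, True
    return capacity, False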

Finally, we present some directions on how this research can be extended and complemented.


Nikos G. Aneroussis: Managing Virtual Circuit and Virtual Path Services on ATM Networks with Quality of Service Guarantees

1. Abstract, Table of Contents and Introduction
2. An Architecture for Managing Virtual Circuit and Virtual Path Services on ATM Networks
3. Virtual Path Control for ATM Networks with Call Level Quality of Service Guarantees
4. A Framework for Pricing VC and VP Services in ATM Networks
5. Conclusions and Future Work

We live in an era when computer and telecommunications networks are exploding in both volume and complexity. A large number of network services with Quality of Service guarantees will be a key feature of these networks. Management, and particularly service and performance management, is becoming of major importance. This dissertation examines the integration of the service and performance management tasks under Quality of Service constraints.

Asynchronous Transfer Mode (ATM) networks are expected to provide two basic unicast connection services: Switched Virtual Circuit (VC) and Virtual Path (VP). These services can be used to create higher-level services, such as Private Virtual Networks. The configuration of Virtual Path (VP) connection services in particular is expected to play an important role in the operation of large-scale ATM networks. VPs can be used by the network operator to optimize the call-level performance of the broadband network, or provided as a service to network users.

Based on a generic architectural model for providing connection services in broadband networks, we present an architecture for integrating the VC and VP services into the network management system. Complete definitions and behavioral descriptions of Managed Object Classes are given. An experimental platform on top of the XUNET III ATM network provides the proof of concept. The Xunet network manager is equipped with the necessary monitoring tools for evaluating the performance of the network and controls for changing the parameters of the VP connection services. Performance data from Xunet is presented to highlight the issues underlying the fundamentals of operation of the VP management model, such as the trade-off between throughput and call processing load. A major research challenge is to develop an understanding of this fundamental trade-off and provide an algorithm for VP capacity allocation that achieves an optimal network operating point while guaranteeing Quality of Service at the call level and satisfying a priori bounds on the processing load of the call processors.

We present a taxonomy of previous approaches to the problem and identify their strengths and weaknesses. Based on these observations, we provide an algorithm for the VP distribution problem that satisfies nodal constraints on the signaling processing load and blocking constraints for each Source-Destination pair. The algorithm maximizes the network revenue under the above set of constraints regardless of the number of traffic classes in the network, the method of representation of networking resources, the admission control policy used in every link and VP, and the network routing scheme. The algorithm is applied to three sample networks and its performance characteristics are studied. In one case, the VP distribution algorithm is applied to the Xunet ATM testbed and the predicted performance is verified experimentally.

As a complement to the VP service control framework, we examine pricing schemes for the VC and VP services. Pricing can be used by the network manager as a control mechanism to influence the user demand for services, thereby preventing network overloads; it can also be used by the network operator as a means of increasing its revenue. We propose a model for characterizing the user demand in an environment where users can request any combination of VC and VP services. The model is applied to a large user population, and a pricing adjustment scheme is derived for the network operator that senses the user reaction to service prices for every Source-Destination (SD) pair and drives the users to a point where the network maximizes its revenue. This scheme has the characteristics of a game and the final pricing solution is a Nash equilibrium. A second level of optimization is introduced, where the network manager distributes the available capacity between SD pairs in order to maximize the total network revenue. Finally, we examine the user demand optimization problem from the Virtual Network (VN) user standpoint. We propose an algorithm that receives as input the configuration of the VN and the offered load and provides the VP capacities with the minimum overall cost. Experimental results are provided to illustrate the effects of pricing on Virtual Network design.


Predrag R. Jelenkovic: Multiple Time Scales and Subexponentiality in Broadband Networks

Thesis

The main theme of this dissertation is the evaluation of the capacity of broadband multimedia network multiplexers. This problem calls for the modeling of network traffic streams and the analysis of a network multiplexer that is loaded with the corresponding models.

For modeling, we focus on MPEG video traffic streams that are expected to be predominant in the traffic mixture of future multimedia networks. We experimentally demonstrate that real-time MPEG video traffic exhibits multiple time scale characteristics, as well as subexponential first and second order statistics. Then we construct a model of MPEG video that captures both of these characteristics and accurately predicts queueing behavior for a broad range of buffer and capacity sizes. Depending on whether a network multiplexer (loaded with MPEG) is strictly or weakly stable, the dominant effect on the queueing behavior arises from the multiple time scale or the subexponential structure, respectively. Correspondingly, we identify two general classes of stochastic processes, to which the constructed MPEG model belongs, and advance mathematical tools for the associated queueing analysis.

First, we consider a class of multiple time scale processes. When these processes are loaded into a strictly stable queue, buffer occupancy probabilities exhibit a distinctive functional behavior that, as we explain, results from their multiple time scale structure. In this case, we demonstrate that the Dominant Root queueing approximation (also called Equivalent Bandwidth (EB)) may be off by orders of magnitude, and that an EB-based admission control policy might be too conservative. Furthermore, we show that the EB approximation does not depend on the slow time scale statistics and corresponds to the case in which the arrival processes stay in their highest activity states all of the time. This explains the observed mismatch between the EB approximation and experimental results. In order to alleviate this problem, we develop an asymptotic expansion (Perturbation Theory) technique for approximating all of the buffer probabilities.

Second, for a class of subexponential (non-Cramér type) processes, we extend a well-known probability/queueing result by proving that the asymptotic behavior of a subexponential GI/GI/1 queue is invariant with respect to Markov modulation. Then we construct a class of subexponentially correlated processes for modeling MPEG video sources. When these processes are fed into a fluid flow queue, we prove that the queue length distribution is directly asymptotically proportional to their autocorrelation function.

For the problem of multiplexing a large number of On-Off sources with subexponential (long-tailed) On periods, we derive precise and bounding asymptotic results for approximating large buffer occupancy probabilities. Based on these asymptotic results, we suggest a practical approximation method for the case of finitely many multiplexed sources. The efficacy of the method is demonstrated on extensive numerical examples. In addition, when subexponential On-Off sources are multiplexed with exponentially bounded sources, the dominant effect for the large buffer probabilities is due to subexponential sources. This might have a practical implication for multiplexing voice (exponential) and video (subexponential) traffic sources.

Finally, we propose both multiple time scale and subexponential approximation techniques for efficient capacity evaluation and admission control in broadband multimedia networks.


Mun Choon Chan: Architecting the Control Infrastructure of Multimedia Networks

Thesis (pdf)

This thesis addresses some of the fundamental problems related to architecting the control infrastructure of a broadband Asynchronous Transfer Mode (ATM) network. Specifically, the issues of service provisioning and signaling performance are addressed. For service provisioning, the focus is on developing an ATM Virtual Private Network (VPN) architecture. The objective is to find the appropriate trade-off between user-network interaction complexity and the degree of user control. For signaling, using an open network control model, various approaches to increasing the performance and scalability of the signaling infrastructure are studied. The objective is to design a high performance connection management framework on top of a flexible distributed processing platform.

Although an ATM VPN is not the same as an end-to-end Virtual Path (VP) or Virtual Circuit (VC), most of today's providers provision these services the same way. As a result, customers have limited control over the VPN and cannot exploit knowledge about their own traffic statistics. The proposed solution is a Virtual Path Group (VPG)-based ATM VPN service, which reduces the customers' dependency on the provider and allows the customers to implement control objectives and schemes according to their own requirements. A VPG-based VPN provides a very good intermediate point between the use of leased lines, where there is minimum network and user interaction, and the use of VC setup, where every change in the user domain must be known to the network provider. The management and control architecture of the resulting ATM network is designed so that it is independent of the characteristics of the underlying public network services, and guarantees quality of service. The control architecture of the VPN, which runs within the customer domain, is structured into three layers of control, according to different time-scales. Each control layer is modeled by a generic controller design concept which allows a large class of control objectives and control schemes to be implemented.

The VPN control architecture has been implemented on a network emulator that contains a parallel simulation kernel. The network emulation platform supports real-time visualization and interaction, and an implementation using up to 128 processors was evaluated. Using a prototyping concept that allows better integration between simulation studies and software development, a significant amount of the software developed on the network emulator is re-used on the target platform, which consists of a network of heterogeneous ATM switches.

Using a network programming environment in which there is open and uniform access to abstractions that model the local states of networking resources, the work on signaling focuses on designing a high performance connection management framework which exploits the programmability of the network. The number of remote invocations is reduced drastically by caching of network states and by aggregation of access to remote objects. Further, the impact of latency incurred in remote invocations is reduced through parallelization. For various system configurations, parameter values for controlling the amount of caching and aggregation are proposed so that the system can be dimensioned appropriately for the desired trade-off between call setup latency and throughput. The system is implemented in C++ and runs on general purpose UNIX workstations. Its performance is comparable to the switching performance in a backbone network. For heavy traffic loads, a call throughput of approximately 10^6 calls/hr can be attained, and for light traffic loads, a call setup latency of about 10 ms can be achieved.


Nemo Semret: Market Mechanisms for Network Resource Sharing

Thesis (pdf) Thesis (.ps.gz)

The theme of this thesis is the design and analysis of distributed and decentralized market mechanisms for resource sharing in multimedia networks. The motivation for the market-based approach is twofold. First, in modern multiservice networks, resources such as bandwidth and buffer space have different value to different users, and these valuations cannot, in general, be accurately known in advance as users compete against each other for the resources. Second, the network resources themselves are distributed, and often, not subject to any single authority.

We present the Progressive Second Price auction (PSP), a new decentralized mechanism for allocation of variable-size shares of a resource among multiple users. Under elastic demand, the PSP auction is incentive compatible and stable, in that it has a ``truthful'' ε-Nash equilibrium where all players bid at prices equal to their marginal valuation of the resource. PSP is efficient in that the equilibrium allocation maximizes total user value. In a dynamic setting, we derive a bound on the time to converge to equilibrium, when users are using an optimal normal form strategy. We then generalize PSP to be applied by independent resource sellers on each element of a network with arbitrary topology, with players having arbitrary but fixed routing/provisioning constraints. We derive an optimal truthful strategy for coordinated bidding for a player participating in auctions on multiple resource elements, and show that the equilibrium and efficiency results still hold. We also show how our networked auction model can apply to virtual networks, virtual paths, and edge capacity allocation networks.
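
A simplified sketch of the PSP rule for a single divisible resource (our reading, with tie-breaking and the formal details of the thesis elided): bids are served in decreasing price order, and each player is charged the declared value of the quantity its presence denies to the others, in second-price fashion.

def psp_allocate(bids, Q):
    """One round of a simplified Progressive Second Price rule.

    bids: dict player -> (quantity q_i, unit price p_i) for a divisible
    resource of size Q. Each player receives what is left after all
    higher-priced bids are served; the charge compensates the others for
    the quantity they lose because of that player's presence.
    """
    def alloc(active):
        out, left = {}, Q
        for pid in sorted(active, key=lambda k: -active[k][1]):
            out[pid] = min(active[pid][0], left)   # serve by decreasing price
            left -= out[pid]
        return out

    a = alloc(bids)
    charges = {}
    for i in bids:
        others = {j: bids[j] for j in bids if j != i}
        a_without_i = alloc(others)                # allocation if i were absent
        charges[i] = sum(bids[j][1] * (a_without_i[j] - a[j]) for j in others)
    return a, charges

alloc, cost = psp_allocate({"A": (5, 3.0), "B": (4, 2.0), "C": (4, 1.0)}, Q=8)
print(alloc, cost)   # C gets nothing, denies nothing, and pays nothing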

We then turn our attention to the problem of reservations and admission control for connection-oriented network services. We propose a new approach to pricing of capacity in service systems with blocking, using spot and derivative market mechanisms. A second-price auction among arrivals grouped in batches gives rise to the spot market of usage charges. A reservation guaranteeing access for an arbitrary duration with a usage price below the bid can be made at any time before or during service, thus eliminating the risk -- inherent to the spot market -- of being dropped before service completion. We define the reservation as a hold option, which is analogous to derivative financial instruments (e.g. options, futures) integrated over time. Based on a heavy-traffic diffusion model for the corresponding two-stage queueing system, we compute the reservation fee as the fair market price of a hold option. We validate this approach with simulations driven by a real traffic trace at a dial-up Internet access modem-pool.

Finally, we present a decentralized, distributed, flexible software architecture implementing the above pricing systems.


Jean-François Huard: A Programmable Architecture for Object-to-Object Communications with Quality of Service Guarantees

Thesis (pdf)

Emerging multimedia applications based on object-oriented technologies require many communications channels with quality of service (QOS) guarantees to be established and released at a high rate between two hosts. The connection setup requirements imposed by these applications may overload the signaling system of connection-oriented networks, and, if the communications are not maintained for a long enough period of time, the overhead of connection setup may be relatively high. Central to this thesis is the presentation of a general model for object-to-object communications with QOS guarantees that overcomes this problem.

The model consists of adding a multiplexing layer above the transport layer in the transport protocol stack. The multiplexing layer bundles many communications channels with similar QOS requirements into one transport connection. To support the link bandwidth requirements of all the multiplexed object-to-object communications channels, an adaptive bandwidth allocation algorithm is presented that balances the link bandwidth allocated to the transport connection against the network signaling load required to adjust its capacity.

For controlling and managing the multiplexing layer, a signaling system based on the MPEG-4 Delivery Multimedia Integration Framework (DMIF) is presented. The signaling system supports the creation and deletion of multiplexers, the association of transport connections with multiplexers, and the establishment and release of channels with QOS guarantees within multiplexers.

For selecting the best networking technology available based on QOS requirements, a QOS mapping framework is presented. Rules for translating applications' loss and delay requirements into network packet loss and delays are given. The rules are derived theoretically and validated experimentally.

To accommodate the different delivery requirements posed by emerging multimedia applications, a programmable transport architecture able to dynamically create protocol stacks that suit the applications' QOS requirements is presented. The architecture allows the flexibility to create, deploy and manage services such as accounting, QOS monitoring and QOS negotiation in response to applications' QOS requirements. By making the service space programmable, the process of automating all of the object-to-object communications systems requirements is elegantly addressed.

As a proof of concept, an instance of the realization of the MPEG-4 DMIF is presented. The realization of DMIF is greatly facilitated by xbind, a broadband kernel that provides connection management, routing, QOS mapping and intelligent selection of suitable transport protocol stacks. The resulting system, called XDMIF, supports different QOS requirements and the rapid creation and release of numerous transport channels along with efficient management of networking resources.


Cristina Aurrecoechea: Modeling Service Management for Programmable Architectures

Thesis (pdf)

The concept of Service Management was first described in the TMN (Telecommunications Management Network) framework. TMN was aimed at public networks that provided a small and well defined set of telecommunication services. However, this old architecture cannot adequately deal with current demands brought on by the recent explosion of distributed multimedia applications and services. New telecom architectures, called Programmable Architectures, are necessary to facilitate the rapid and flexible provision of new services with QoS capabilities; they pose new challenges to management that are only beginning to be addressed.

This thesis presents a top-down approach to understanding service management in the context of programmable architectures. I propose a standard language to describe networks and services based on graphs; this allows us to define the control and management activities on them as graph construction and manipulation activities. This graph model helps in defining APIs for the management of multilayered networks as well as multimedia services.

I establish a methodology to compare programmable architectures based on the graph model. Specifically I compare TINA and xbind as two important representatives of programmable architectures. TINA represents the direction in which telco operators are moving, beyond IN (Intelligent Networks), while xbind is an example of an open programmable network. I show that the precise definition of services as graphs leads to a foundation of comparative analysis between programmable architectures that guarantee QoS.

Finally, I address the design and implementation of a multicast service in which to test the management model described above. We tested the prototype on two platforms: an emulation testbed to address scalability issues, and a real testbed (xbind) to experiment with a real transport service. To speed up the prototyping process we designed and implemented a new software environment, TeleSoft, that facilitates moving source code between platforms. The prototyping work demonstrated the validity of the management model and suggested that it may be generally applicable.


Xin Wang: Scalable Network Architectures and Measurements for Multicast and Adaptive QoS

Thesis (pdf)

New IP services and applications have diverse and stringent bandwidth and quality of service requirements. It is difficult and costly to predict these requirements and add sufficient capacity to provide reliable and high quality service. This thesis is broadly concerned with scalable and efficient architectures for delivering Internet applications reliably and with high quality. The work consists of two parts.

We have developed a flexible, scalable service framework to utilize existing network capacity efficiently. The framework is based on conditioning the network to provide multiple services with certain quality of service expectations even under high demand, using short-term, dynamic configuration of network resources for better response to user demand and more efficient resource usage. To support this framework, we develop a Resource Negotiation and Pricing Protocol (RNAP) to enable the user and the network (or two peer networks) to dynamically negotiate services, and to formulate and communicate prices and charges.

As part of the service framework, we also propose a pricing model in which services are priced based on QoS (resources consumed) and user willingness-to-pay. The model also motivates rate and service adaptation by applications with elastic demand through congestion-sensitive pricing of certain services, while providing more expensive services with static pricing for non-elastic users. We describe two mechanisms to develop a congestion-sensitive price component, one based on a tâtonnement process, and the other on a second-price, multiple-bid auction. We also describe a user rate-adaptation model in response to price changes, based on maximization of the user-perceived benefit.
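
A minimal sketch of a tâtonnement-style update for the congestion-sensitive price component (the gain and the demand figures are invented for illustration): the price moves with the excess demand, and rate-adaptive users respond by re-optimizing against the new price.

def tatonnement_step(price, demand, supply, kappa=0.1, floor=0.0):
    """One tâtonnement iteration: raise the price when aggregate demand
    exceeds the provisioned supply, lower it otherwise (kappa and the
    price floor are illustrative parameters)."""
    excess = (demand - supply) / supply
    return max(floor, price * (1.0 + kappa * excess))

price = 1.0
for demand in (140.0, 125.0, 110.0, 102.0):   # elastic users back off as price rises
    price = tatonnement_step(price, demand, supply=100.0)
    print(round(price, 4))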

We have demonstrated the functionality of RNAP on a testbed network, and have shown that the framework can achieve high utilization and control congestion, while maintaining a stable price. We also present simulation results to show that our proposed framework achieves a lower connection blocking rate, higher user satisfaction based on user utility functions, and higher network revenue. The performance improvement increases with the number of connections. The auction-based congestion pricing mechanism is seen to provide higher network utilization than the tâtonnement-based mechanism, at the cost of higher complexity.

The second part of the thesis is a study of the performance of the Lightweight Directory Access Protocol (LDAP). LDAP is potentially useful for the management of network resources, and the administration of traffic-handling and pricing policies. We describe a benchmark tool to study LDAP performance in a dynamic environment. The tool provides a detailed profile of the latency and throughput contributions of system components. We report measured performance using an LDAP schema proposed for administration of Service Level Specifications in a differentiated network. In most cases, the connection management latency increases sharply at high loads and thus dominates the response time. The TCP Nagle algorithm is found to introduce a very large additional latency, and it appears beneficial to disable it in the LDAP server. The CPU capability is found to be significant in limiting the performance of the LDAP server, and for larger directories, which cannot be kept in memory, data transfer from the disk also plays a major role. The scaling of server performance with the number of directory entries is determined by the increase in back-end search latency, and scaling with directory entry size is limited by the front-end encoding of search results, and, for out-of-memory directories, by the disk access latency.


Jonathan Rosenberg: Distributed Algorithms and Protocols for Scalable Internet Telephony

Thesis (pdf)

Internet telephony service is defined as the provision of real-time, interactive, multimedia telecommunications services between human users, using the public Internet.

The most difficult problem in providing Internet telephony is to overcome the increased jitter, delay, and loss (as compared to circuit-switched networks) suffered by voice. Past work has separately investigated Forward Error Correction (FEC) and playout buffer adaptation mechanisms to resolve these problems. We show that these mechanisms must be considered jointly. We propose and simulate a number of algorithms for integrating FEC into playout buffer adaptation schemes, and show that they are superior to non-integrated algorithms.
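
A hedged sketch of the kind of coupling argued for above (not the thesis's algorithms): the receiver tracks delay and delay variation with the usual exponential averages, but the playout point must also cover the FEC recovery offset, since a packet repaired by redundancy sent several packets later cannot be played out before that redundancy arrives.

ALPHA = 0.998   # smoothing constant of the classic autoregressive delay estimator

class IntegratedPlayout:
    """Sketch of jointly choosing FEC and playout delay (illustrative only).

    fec_offset is the spacing, in packets, between a packet and the
    redundancy that protects it; packet_spacing is the inter-packet time.
    """
    def __init__(self, fec_offset, packet_spacing):
        self.d = None                       # smoothed one-way delay
        self.v = 0.0                        # smoothed delay variation
        self.fec_wait = fec_offset * packet_spacing

    def on_packet(self, measured_delay):
        if self.d is None:
            self.d = measured_delay         # initialize on the first packet
        self.d = ALPHA * self.d + (1 - ALPHA) * measured_delay
        self.v = ALPHA * self.v + (1 - ALPHA) * abs(measured_delay - self.d)
        jitter_delay = self.d + 4 * self.v  # classic playout rule of thumb
        return max(jitter_delay, self.d + self.fec_wait)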

Receiving feedback about network transport quality is essential for supporting adaptive applications. We examine the issues surrounding scalability of transport feedback in large scale multicast groups. We present, analyze, and simulate a class of algorithms termed reconsideration, which support congestion controlled feedback in highly dynamic groups, and then show how the memory requirements of our algorithms can be reduced.

We consider signaling protocols for providing call establishment, management, features, and applications. After an analysis of existing Internet telephony signaling protocols, we propose a new protocol, the Session Initiation Protocol (SIP), which overcomes the limitations of existing protocols. We describe an implementation of this protocol in software, and discuss applications we have built with it.

We consider interconnection with the telephone network, and focus on the problem of discovery of telephony gateways. We show that this is a subset of a broader wide area service discovery problem. After reviewing existing protocols for resource discovery (and finding them lacking for wide area applications), we present a scalable protocol for wide area service discovery, which is ideal for discovery of gateways, amongst other resources.

Finally, we consider the problem of a service architecture for Internet telephony, which provides features and complex applications to users. We review the service architectures that have been presented in the literature. We then propose our architecture, the application component architecture, which combines the best aspects of existing work. We show how this architecture can be used to provide several complex applications.


Ping Pan: Scalable Resource Reservation Signaling in the Internet

Thesis

Resource reservation protocols were originally designed to signal end hosts and network routers to provide quality of service to individual real-time flows. More recently, Internet Service Providers (ISPs) have been using the same signaling mechanisms to set up provider-level Virtual Private Networks (VPNs) in the form of MPLS Label Switched Path (LSP). It is likely that the need for reservation signaling protocols will increase, and these protocols will eventually become an indispensable part of Internet service. Therefore, reservation signaling must scale well with the rapidly growing size of the Internet.

Over the years, there have been debates over whether or not there is a need for resource reservation. Some people have been advocating over-provisioning as the means to solve link congestion and end-to-end delay problems. The over-provisioning argument is largely driven by the expectation that the bandwidth price will drop drastically. From our investigation, however, we found that many end users have not been benefiting from over-provisioning: the current Internet has bandwidth bottleneck links that can cause long-lasting congestion and delay. At the same time, leased line cost has not been reduced sufficiently in a timely manner for many network providers to deploy high-speed links everywhere in their networks.

Applying resource reservation brings many benefits to the network users. Unfortunately, the current resource reservation framework has scalability problems in terms of storage, bandwidth, message processing and manageability. To address these problems, we first evaluate methods that are designed to improve the scaling properties in RSVP. Though some of the methods can reduce protocol processing overhead substantially, they do not reduce the total number of reservations in the network. Thus, we argue that merely enhancing the existing signaling protocols may not be sufficient.

Generally, scalability problems can be solved by building a hierarchy. Resource reservation signaling is no exception. Depending on traffic behavior and service requirements, we propose a hierarchical reservation model that supports reservation signaling at the end-user's application layer as well as at the network provider's backbone level. In the model, end users may use lightweight signaling protocols to set up reservations for short-lived real-time applications. Within the network, service providers, based on bilateral agreements, establish long-lasting and more static reservation “trunks” among each other. At the network edge or border, end-user reservations are aggregated into the provider's reservation “trunks”, depending on the user's qualification and network resource availability.

To explore lightweight signaling, we introduce YESSIR, a simplified application-layer reservation protocol. It is designed to establish reservations for real-time streaming traffic. To simplify the processing at routers, YESSIR uses a one-pass signaling sequence and allows data senders to initiate reservations. YESSIR also uses partial reservation and reservation retry techniques to speed up the setup. Our implementation results show that with proper protocol design and implementation, network routers can support a large number of user reservations (10,000 reservation requests per second on a FreeBSD prototype).

One of the most challenging aspects of provider-level signaling is that the protocol needs to be applicable and scalable to potentially all network providers in the Internet. After evaluating traffic traces from the Internet backbone, we derive a sink-tree algorithm, in which the reservations from other providers that follow the inter-domain routing paths to a destination provider's network form a tree, rooted at the destination provider's border router. The sink-tree approach has the property that the maximum number of reservations at network routers is always O(N), where N is the total number of routing domains in the network. This should reduce the total number of inter-domain reservations to a manageable level. We present an inter-domain reservation protocol, BGRP, that is based on the sink-tree algorithm. BGRP also has several built-in features that allow fast setup and make it resilient to route flapping.
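
A toy sketch of the state-reduction effect of sink-tree aggregation (names and structure are ours, not BGRP's message formats): a border router folds every reservation bound for the same destination domain into a single aggregate, so its table grows with the number of domains rather than the number of flows.

class BorderRouter:
    """Per-router reservation table under sink-tree aggregation (sketch).

    All reservations headed for the same destination domain share one
    aggregate, so the table holds at most one entry per destination
    domain, i.e., O(N) state regardless of the number of end-user flows.
    """
    def __init__(self):
        self.aggregates = {}                 # dest domain -> reserved bandwidth

    def reserve(self, dest_domain, bandwidth):
        self.aggregates[dest_domain] = self.aggregates.get(dest_domain, 0.0) + bandwidth

    def release(self, dest_domain, bandwidth):
        left = self.aggregates.get(dest_domain, 0.0) - bandwidth
        if left <= 0:
            self.aggregates.pop(dest_domain, None)
        else:
            self.aggregates[dest_domain] = left

r = BorderRouter()
for flow_bw in (1.5, 0.5, 2.0):           # three flows, one destination domain
    r.reserve("AS-42", flow_bw)
print(len(r.aggregates), r.aggregates)    # one table entry holding 4.0 units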


Mahesan Nandikesan: On the Foundations of Network Programmability

Thesis

In recent years, programmability in telecommunication networks has been recognized as a powerful paradigm for realizing network services. Yet, no systematic foundation for capturing network programmability has been laid. The present thesis attempts to fill this void.

The essence of network programmability is the prevalence of programming interfaces for altering the network state to create and remove communication channels with quality of service. The thesis lays the foundations of network programmability in terms of (i) a core network resource model, (ii) a network-element service model, (iii) topology abstraction, (iv) a network service model, (v) APIs for accessing these models, and (vi) APIs for signaling.

The thesis identifies communication channels (modeled as graphs) and their manipulation as the essence of network control. It presents concrete resource and service models and APIs for building communication graphs, applicable to connection-oriented and connectionless networks, and to packet-switched, circuit-switched and wavelength-switched networks. This is the first time such generality has been achieved. Yet, the models and APIs are very compact. The thesis also presents peer-to-peer APIs for network resource discovery. It introduces the concept of minimality of an API, and shows that the APIs presented are minimal.

The thesis identifies two fundamental capabilities of a network, which are necessary and sufficient to realize a network control infrastructure. A third fundamental capability is shown to help performance optimization.

The benefits of the foundations are (i) conceptual clarity resulting from the identification of network control as graph manipulation, (ii) programming simplicity, and (iii) separation of control algorithms from the network state. Of these, the first two are original contributions of this thesis. The following are demonstrated: The APIs resulting from the foundations greatly simplify the task of writing software for network resource discovery. The source code is much smaller and reflects high-level concepts much more clearly. A simple RPC-like ORB is adequate for accessing these APIs remotely; not even a name service is necessary. The performance of a system implemented with these APIs is only slightly lower than that of one written directly in terms of protocols.
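
A toy rendering of the graph-manipulation view of network control (all names are hypothetical, not the thesis's API): creating a channel with quality of service is a graph operation that atomically reserves capacity along a path, or fails as a whole.

class NetworkGraph:
    """Toy graph-manipulation view of network control (names hypothetical)."""
    def __init__(self):
        self.free = {}                            # (u, v) -> free link capacity

    def add_link(self, u, v, capacity):
        self.free[(u, v)] = capacity

    def create_channel(self, path, bandwidth):
        links = list(zip(path, path[1:]))
        if any(self.free.get(l, 0.0) < bandwidth for l in links):
            return False                          # admission fails atomically
        for l in links:
            self.free[l] -= bandwidth             # reserve along the whole path
        return True

g = NetworkGraph()
g.add_link("a", "b", 10.0)
g.add_link("b", "c", 10.0)
print(g.create_channel(["a", "b", "c"], bandwidth=4.0))   # True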


Javier Gomez-Castellanos: Energy-Efficient Routing and Control Mechanisms for Wireless Ad Hoc Networks

Thesis

This thesis contributes toward the design of new network layer protocols for emerging wireless ad hoc networks based on a foundation of variable-range transmission control. Wireless ad hoc networks represent autonomous distributed systems that are infrastructureless, fully distributed and multihop in nature. The thesis begins by investigating some fundamental tradeoffs and performance limits of wireless ad hoc networks based on common-range transmission control. We show how an alternative approach that uses variable-range transmission control can improve the overall performance of the wireless network. The advantage of using variable-range transmission control is twofold. First, it can reduce the overall transmission power in the network, thereby increasing the energy savings of wireless devices that are typically energy limited. Second, it can increase the traffic carrying capacity of these wireless networks in comparison to existing systems that are based on common-range transmission control.

More specifically, we study the impact of transmission power on physical and network connectivity, network capacity, and power savings from a mathematical perspective. We make a number of fundamental contributions. First, we show that variable-range transmission approaches achieve lower transmission power levels than the minimum power levels obtainable by common-range transmission-based routing protocols. Second, we show that a variable-range transmission policy can maintain constant per-node capacity even when more nodes are added in a fixed-area network. Third, we derive a model that approximates the signaling overhead of a routing protocol as a function of the transmission range and node mobility for both route discovery and route maintenance.

Based on the results and insights from the first part of the thesis, we propose a network-level routing scheme called PARO. We present the design, analysis and implementation of PARO, which represents a new approach to dynamic power-controlled routing that helps to minimize the transmission power needed to forward packets between devices in wireless ad hoc networks. Using PARO, one or more intermediate nodes called "redirectors" elect to forward packets on behalf of source-destination pairs, thus reducing the aggregate transmission power consumed by wireless devices. We use a combination of analysis, simulation and results from an experimental wireless testbed implementation of PARO to show the performance of our approach. We discuss the limitations of existing radio and MAC technology in realizing the initial goals of the protocol using off-the-shelf technology.
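
The redirector election at the heart of this approach can be sketched with a simple power model: transmit power grows with distance roughly as d^alpha, where alpha is the path-loss exponent (typically between 2 and 4), so a relay near the midpoint of a long hop lowers the aggregate power. The rule below is a minimal illustration under that assumed model, not PARO's full protocol.

    def tx_power(a, b, alpha=2.0):
        # Assumed path-loss model: power needed ~ distance**alpha.
        (x1, y1), (x2, y2) = a, b
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** (alpha / 2)

    def elect_redirector(src, dst, candidates, alpha=2.0):
        """A candidate elects itself only if relaying through it lowers the
        aggregate transmission power of the source-destination pair."""
        best, best_cost = None, tx_power(src, dst, alpha)
        for node in candidates:
            relay = tx_power(src, node, alpha) + tx_power(node, dst, alpha)
            if relay < best_cost:
                best, best_cost = node, relay
        return best   # None means direct transmission is already cheapest

    # A near-midpoint relay roughly halves the cost: 2*(d/2)^2 = d^2/2.
    print(elect_redirector((0, 0), (10, 0), [(5, 1), (9, 9)]))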

In the final part of the thesis we investigate the feasibility of supporting traditional QOS performance (e.g., bandwidth or delay assurances) in a network that is based on variable-range transmission control and PARO-style routing protocols. Specifically, we study the impact of PARO on throughput and end-to-end delay, and study its implementation for wireless ad hoc networks using IEEE 802.11 and an alternative power-controlled multiple access scheme. We show the limitations of these MAC protocols under single and multihop operations. We then propose QOS enhancements to the original PARO protocol, called QoS-PARO, that build new mechanisms into the original PARO system for a class of applications that wish to trade suboptimal power savings for better QOS performance.

The thesis makes a number of important contributions toward the design of large-scale energy-conserving wireless ad hoc networks. While the thesis presents guidelines that govern the design of new protocols based on variable-range transmission control, the practical deployment of such systems is limited today. Significant advances are needed before these systems can be realized, particularly in the area of new radio and MAC layer protocols that can exploit the variable-range transmission design principle and hence provide better support for the new network layer mechanisms proposed in this study.


Raymond R.-F. Liao: Dynamic Bandwidth Management for Internet and its Wireless Extensions

Thesis (pdf)

Over the past decade network bandwidth has become a commodity item, putting pressure on Internet Service Providers (ISPs) to differentiate their service offerings to customers in order to maintain market share. However, realizing service differentiation in IP networks is a broad, multi-dimensional and challenging problem. This thesis addresses this problem and proposes new approaches for bandwidth service management for the Internet and its wireless extensions that include: (i) utility-based adaptation mechanisms, which capture applications' needs and address the technical challenges of supporting controlled service degradation in differentiated service networks; (ii) dynamic provisioning for core networks, which resolves the technical issues associated with managing complex traffic aggregates and delivering quantitative differentiated services in networks with limited state information and control mechanisms; and (iii) incentive engineering, which effectively deals with the arbitrage problem inherent in differentiated services models. We take a systems approach to these problems and investigate new policy modeling, algorithms, and protocol design techniques. We evaluate our research ideas using a combination of analysis, simulation, and results from an experimental wireless testbed.

This thesis makes a number of contributions. Our study is founded on bandwidth utility functions, which are capable of capturing the intrinsic adaptability of applications to bandwidth changes. First, we propose a unified formulation of bandwidth utility functions for application aggregates including TCP, small audio flows, and individual video flows. We discuss experiments using the online generation of utility functions from video traces and present a utility prediction algorithm that addresses the time-scale mismatch that exists between video content changes and network adaptation time scales.

Next, we present two groups of utility-based link allocation algorithms that provide a foundation for utility differentiating and utility maximizing bandwidth management. The utility maximizing algorithm leverages the piecewise linear quantization of utility functions and uses the Kuhn-Tucker condition to significantly reduce the algorithm execution time. Our utility differentiating algorithm supports utility fair allocation that allows individual utility functions to have different maximum utility values. We extend these results to the problem of multi-hop utility-based flow control by augmenting the max-min flow control algorithm to support utility functions. We study, propose and evaluate a utility-based max-min fair allocation and renegotiation protocol in the context of an edge-based wireless access network taking into consideration convergence speed, protocol state reduction, and the management of application adaptation states.
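
For intuition, here is a minimal sketch of utility-maximizing allocation with concave piecewise-linear utilities: because the utilities are concave and piecewise linear, funding segments in order of decreasing slope satisfies the Kuhn-Tucker optimality condition. The flow names and segment data are invented for illustration; this is not the thesis's algorithm or data.

    def allocate(capacity, utilities):
        """utilities: {flow: [(segment_width, slope), ...]} with slopes
        non-increasing within each flow (concavity). Returns the per-flow
        bandwidth maximizing total utility under the capacity constraint."""
        segments = [(slope, flow, width)
                    for flow, segs in utilities.items()
                    for width, slope in segs]
        segments.sort(reverse=True)           # highest marginal utility first
        alloc = {flow: 0.0 for flow in utilities}
        for slope, flow, width in segments:
            grant = min(width, capacity)
            alloc[flow] += grant
            capacity -= grant
            if capacity <= 0:
                break
        return alloc

    # Two flows share 10 Mb/s; the audio flow's first segment is steepest.
    print(allocate(10, {"audio": [(2, 5.0), (3, 1.0)], "video": [(8, 3.0)]}))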

Third, we present a dynamic bandwidth provisioning model for quantitative service differentiation in core networks that comprises node and core provisioning algorithms. The node provisioning algorithm prevents transient violations of Service Level Agreements (SLAs) by predicting the onset of service level violations based on a multi-class virtual queue technique, self-adjusting per-class service weights, and packet dropping thresholds at core routers. Persistent service level violations are reported to a dynamic core provisioning algorithm, which dimensions traffic aggregates at the network ingress taking into account fairness issues not only across different traffic aggregates but also within the same aggregate whose packets can take different routes in the core IP network. We solve the problem of rate regulation for point-to-multipoint flow aggregates with the use of matrix inverse operations. We demonstrate that our model is capable of delivering capacity provisioning in an efficient manner and providing quantitative delay-bounds with differentiated loss across per-aggregate service classes.
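
The early-warning idea behind the node provisioning algorithm can be illustrated with the standard virtual-queue technique: feed a per-class virtual queue the same arrivals as the real queue but drain it at a fraction theta < 1 of the real service rate, so it crosses its threshold before the real queue violates its SLA. The parameters below are illustrative, not the thesis's calibrated values.

    class VirtualQueue:
        def __init__(self, service_rate, theta=0.9, threshold=100.0):
            self.rate = theta * service_rate   # virtual capacity < real one
            self.backlog = 0.0
            self.threshold = threshold

        def arrive(self, load, dt):
            """Same arrivals as the real queue, drained at the scaled rate;
            crossing the threshold signals the onset of a violation early,
            triggering weight/threshold adjustment or core re-provisioning."""
            self.backlog = max(0.0, self.backlog + load - self.rate * dt)
            return self.backlog > self.threshold

    vq = VirtualQueue(service_rate=1000.0)
    for burst in [800, 950, 1200, 1300]:       # per-second arrivals
        print(vq.arrive(burst, dt=1.0), round(vq.backlog, 1))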

Finally, we propose incentive engineering techniques and design two incentive-based allocation service classes that effectively constrain the strategy space of subscribers to a set of cooperative behaviors, including the truthful selection of a service class and the truthful declaration of bandwidth demands. Our design minimizes the protocol messaging overhead imposed on wireless subscribers while possessing a number of beneficial properties, including Nash bargaining fairness for the instantaneous allocation service, and incentive compatibility for mobile users that promotes the truthful declaration of their service preferences.


Michael E. Kounavis: Programming Network Architectures

Thesis (pdf)

In this thesis we address the problem of programming network architectures. We broadly define a network architecture as a distributed communication system having the following attributes: (i) network services, which the network architecture realizes as a set of distributed network algorithms and offers to the end systems; (ii) network algorithms, which include transport, signaling/control and management mechanisms; (iii) multiple time scales, which impact and influence the design of the network algorithms; and (iv) network state management, which includes the state that the network algorithms operate on (e.g., switching, routing, QOS state) to support consistent services. Programmability allows network designers to add, remove, or modify network service components on demand. By adding, removing or modifying network service components, designers can architect their networks to behave more nearly optimally according to some system-wide performance objective. Architecting networks can be accomplished in many different ways, such as by programming the service disciplines supported by the intermediate nodes of networks, or by programming the routing, signaling and flow control algorithms that affect the services offered to the end-systems.

Programming network architectures is a challenging problem. The difficulty stems from the fact that it is hard to define a unifying programmable networking model and a set of programming interfaces that encompass services as diverse as routing, signaling, and access control/forwarding. Another challenging issue is the computational efficiency and performance of programmable network architectures. Programmable networks require more computational resources than existing networks in order to support the introduction of new services in software. In addition, today's router systems are generally configured with only a small amount of memory with limited access bandwidth. Hence, a key challenge is to design programming systems and network algorithms that can operate efficiently under stringent space-time constraints.

This thesis makes a number of contributions. First, a programmable networking model that provides a common framework for understanding the state-of-the-art in programmable networks is presented. A number of projects are reviewed and discussed against a set of programmable network characteristics. We present a simple qualitative comparison of the surveyed work and make a number of observations about the direction of the field.

Next, we present the design, implementation and evaluation of a programming system that automates a life-cycle process for the creation, deployment, management, and architecting of network architectures. We discuss our experiences in building a spawning network testbed that is capable of creating distinct network architectures on demand. Network architectures are created as programmable virtual networks. Our programming system is based on a methodology that allows a child network to operate on top of a subset of its parent's network resources and in isolation from other spawned virtual networks. We show through experimentation how a number of diverse network architectures can be spawned and architecturally refined.

Third, we discuss how we can support end-system connectivity with programmable network architectures. We focus on wireless network architectures, where the connectivity problem is more challenging due to host mobility. We describe a reflective handoff service that allows access networks to dynamically inject signaling systems into mobile devices before handoff. Thus, mobile devices can seamlessly roam between wireless access networks that support different mobility management systems. We also show how a multi-handoff access network service can be introduced that simultaneously supports different styles of handoff control over the same wireless access network. This programmable approach can benefit service providers who need to be able to satisfy the mobility management needs of a range of mobile devices from cellular phones to PDAs and wireless laptop computers.

Next, we study the performance of network programming systems. In particular, we focus on the problem of efficiently programming the data path. Programming the data path is challenging because data path algorithms operate on the fastest time scale and are associated with small time budgets. We focus on the implementation of a network programming system in network processor-based routers. Network processors comprise multiple processing units for parallel packet processing and constitute suitable building blocks for software-based routers. We propose the design of a binding tool called NetBind that balances the flexibility of network programmability against the need to process and forward packets at line speeds. To support dynamic binding of components with minimum addition of instructions in the critical path, NetBind modifies the machine language code of components at run time. To support fast data path composition, NetBind reduces the number of binding operations required for constructing data paths to a minimum set so that binding latencies are comparable to packet forwarding times.

Finally, we study the realization of performance-critical algorithms in programmable networks. Performance-critical algorithms include packet classification, packet forwarding and traffic management. While packet forwarding and traffic management problems have been well studied in the literature, there are still a number of open issues associated with packet classification. We conjecture that the design of classification algorithms will need to exploit the structure and characteristics of packet classification rules. We study the properties of several classification databases and, based on these findings, we suggest a classification architecture that can be implemented efficiently in programmable networks.


Andras Veres: Modeling TCP Dynamics and Engineering Service Differentiation in TCP/IP Networks

Thesis (pdf)

(Thesis deposited without abstract.)


Petar Momcilovic: Statistical resource sharing in the presence of heavy tails

Thesis (pdf)

Self-similar/heavy-tailed phenomena are observed in a large number of natural and man-made systems ranging from earthquakes and genetics to telephone networks and UNIX operating systems. Statistical sharing of bottleneck resources, e.g., bandwidth, processing power and storage, is a common way of increasing the operating efficiency of network and service systems. While the benefits of sharing, i.e., multiplexing, are well understood for exponential traffic models, the implications of long-range dependence are just now being discovered. This thesis focuses on characterizing these implications.

First, we study the classical model of a network switching element, a finite buffer single server queue fed by On-Off traffic sources. The primary performance measures of this model are the loss rate and buffer overflow probability. These quantities indicate the quality of service provided by the system. For the case of heavy-tailed On-periods, explicit asymptotic formulas for the loss rate and buffer overflow probability are derived. The results provide important insight into qualitative tradeoffs between the performance measures and system parameters, the key element for proper dimensioning of the system. Furthermore, we quantify the benefits of buffer sharing and scheduling in the same model.

Second, we examine resource sharing with feedback control, such as TCP, the predominant transport protocol in the Internet. The processor sharing queue represents a baseline model of ideal flow control, i.e., when a number of users share bandwidth, each user receives an equal share. It is shown that job (file) transmission times admit an easy asymptotic characterization depending on whether the job size has a heavier or lighter tail than the Weibull distribution e^{-sqrt(x)}. In other words, this fundamental model exhibits a phase transition at e^{-sqrt(x)}. Furthermore, the newly developed large deviations approach also provides a mathematical framework for proving related results.
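
As background for the heavier-than-Weibull regime (stated here for intuition, not as the thesis's exact theorem): for the M/G/1 processor-sharing queue with load rho < 1 and regularly varying job sizes B, a classical asymptotic for the sojourn time T is

    \[
      \Pr[T > x] \;\sim\; \Pr\!\bigl[B > (1-\rho)\,x\bigr],
      \qquad x \to \infty,
    \]

that is, a very long transmission time is essentially caused by the job's own large size, stretched by the factor 1/(1-\rho); lighter-tailed job sizes lead to qualitatively different, large-deviations-type asymptotics.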


Ana Radovanovic: Nearly Optimal Cache Replacement Policies for Efficient Web Access

Thesis (pdf)

Chieh-Yih Wan: A Resilient Transport System for Wireless Sensor Networks

Thesis (pdf)

This thesis contributes toward the design of a new resilient transport system for wireless sensor networks. Sensor networks have recently emerged as a vital new area in networking research, one that tightly marries sensing, computing, and wireless communications for the first time. Wireless sensors are embedded in the real world and interact closely with the physical environment in which they reside. These networks must be designed to effectively deal with the network's dynamically changing resources, including available energy, bandwidth, processing power, node density, and connectivity. This dissertation focuses on making the sensor network transport system resilient to such changes, which are in many cases abrupt. We define transport resilience as the ability of the network to deliver a sufficient amount of sensing events to meet the applications' fidelity requirements for a set of different traffic classes while reducing the energy consumption of the network. More specifically, we investigate, study, and analyze two classes of transport resilience: (1) the need to reliably deliver data under various error conditions; and (2) the need to maintain the application's fidelity under congested network conditions. We take an experimental systems research approach to the problem of supporting resilience in sensor networks by building an experimental sensor network testbed and evaluating a set of new resilient transport algorithms under various workloads and changing network conditions. We study the behavior of these algorithms under testbed conditions, and apply what is learned toward the construction of larger and more scalable resilient networks.

This thesis makes a number of contributions. First, we propose a new reliable delivery transport paradigm for sensor networks called Pump Slowly Fetch Quickly (PSFQ). PSFQ represents a lightweight, scalable and robust transport protocol that is customizable to meet a wide variety of application needs (e.g., re-programming, actuation, reliable event delivery). We present the design and implementation of PSFQ, and evaluate the protocol using the ns-2 simulator and an experimental wireless sensor testbed based on Berkeley motes and the TinyOS operating system. The PSFQ protocol represents the first reliable transport protocol proposed for wireless sensor networks.
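
The receiver-side behavior that gives PSFQ its name can be sketched as follows: segments are pumped at a deliberately slow pace, and a node that detects a sequence gap immediately fetches the missing segments from its upstream neighbor on a much faster timescale. The message handling below is a simplified illustration, not PSFQ's actual packet format or timer design (which, for instance, suppresses duplicate fetches).

    class PsfqReceiver:
        def __init__(self, send_fetch):
            self.expected = 0             # next in-order segment number
            self.buffer = {}
            self.send_fetch = send_fetch  # hop-by-hop NACK to the neighbor

        def on_segment(self, seqno, data):
            self.buffer[seqno] = data
            if seqno > self.expected:     # gap detected: fetch quickly
                missing = [s for s in range(self.expected, seqno)
                           if s not in self.buffer]
                if missing:
                    self.send_fetch(missing)
            while self.expected in self.buffer:   # deliver in-order prefix
                self.expected += 1

    rx = PsfqReceiver(send_fetch=lambda m: print("FETCH", m))
    for seq in [0, 1, 3, 4, 2]:           # segment 2 arrives late
        rx.on_segment(seq, b"...")
    print("delivered up to", rx.expected)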

Next, we present the design of an energy-efficient congestion control scheme for sensor networks called CODA (COngestion Detection and Avoidance). We define a new objective function for traffic control in sensor networks, which maximizes the operational lifetime of the network while delivering acceptable data fidelity to sensor network applications. CODA is founded on three important distributed control mechanisms: (1) an accurate and energy-efficient congestion detection scheme; (2) a hop-by-hop backpressure algorithm; and (3) a sink-to-multi-source regulation scheme. We evaluate a number of congestion scenarios and define new performance metrics that capture the impact of CODA on sensing application performance. We analyze the performance benefits and practical engineering challenges of implementing CODA in an experimental mote-based sensor network testbed. CODA represents the first comprehensive solution to the congestion problem in sensor networks.
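
A minimal sketch of the detection-plus-backpressure pattern follows; the Node class, thresholds, and message shapes are illustrative stand-ins rather than CODA's implementation. Congestion is inferred from buffer occupancy combined with cheap, send-time channel sampling, and a congested node pushes a backpressure signal one hop upstream.

    class Node:
        def __init__(self, name, capacity=10):
            self.name, self.queue, self.capacity = name, [], capacity

        def buffer_occupancy(self):
            return len(self.queue) / self.capacity

        def sample_channel(self):
            return 0.3   # stub: fraction of time the channel is sensed busy

        def congested(self, buf_thresh=0.8, chan_thresh=0.7):
            return (self.buffer_occupancy() > buf_thresh
                    or self.sample_channel() > chan_thresh)

        def receive(self, pkt, upstream):
            if self.congested():
                # Hop-by-hop backpressure: upstream neighbors throttle or
                # propagate the signal further toward the traffic sources.
                return [("BACKPRESSURE", nbr) for nbr in upstream]
            self.queue.append(pkt)
            return []

    relay = Node("relay")
    relay.queue = ["pkt"] * 9                     # nearly full buffer
    print(relay.receive("new", upstream=["s1", "s2"]))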

The final contribution of this dissertation explores a complementary solution to CODA, called dual-radio virtual sinks, that boosts the performance of sensor networks even under persistent overload conditions. We propose to randomly distribute a small number of all-wireless dual-radio virtual sinks throughout the sensor field. In essence, these virtual sinks operate as safety valves in the sensor field by selectively siphoning off overload traffic in order to maintain the fidelity of the application signal delivered to the network's physical sink. A key feature of virtual sinks is that they are equipped with a secondary higher-bandwidth, longer-range radio (e.g., IEEE 802.11) in addition to their primary low-bandwidth, low-power mote radio. Virtual sinks are capable of dynamically forming a secondary ad hoc radio network that can be used on demand by the mote radio network. Rather than rate-controlling packets during periods of congestion (as is the case with CODA), virtual sinks take the congested traffic off the low-powered sensor network and move it onto the secondary radio network, transiting it to the network's physical sink. We study, propose, and evaluate a set of algorithms for virtual sink discovery, selection, traffic transiting and load balancing. We leverage the Stargate platform to support an all-wireless virtual sink approach in our sensor network testbed.

We believe that sensor networks must be built to be robust to various software and hardware failures, and be resilient to dynamic resource changes such as node failures, increased packet error rates, and traffic surges. Collectively, PSFQ, CODA, and virtual sinks provide a set of energy-efficient, robust transport mechanisms that serve as a foundation for making sensor networks more resilient.


Daniel Villela: Resource Management in Large–Scale Services: Models and Algorithms

Thesis (pdf)

Many of the services in today's Internet (e.g., file transfer, streaming media, auctions) are offered by companies that either own or lease a fixed, static set of server resources to host these services. Under such a structure, a company can provision its serving resources for typical loads or to conservatively handle peak demands. Either way has drawbacks: in the former case, poor performance is expected when peaks of demand occur; in the latter, costly resources remain idle most of the time.

This thesis explores provisioning alternatives that use pools of servers in novel ways that permit provisioning for typical loads while still accommodating peak demands. Three specific problems are explored by constructing and analyzing mathematical models.

First, we explore the benefits that independent companies can gain from pooling their servers to jointly serve data delivery. This data may have real-time requirements (e.g., multimedia), or it may be elastic (e.g., file transfer). We show that such pooling can simultaneously benefit all participating companies, but that it is necessary for these companies to have similar demands and to offer similar quantities of serving resources.

Second, we explore how to reduce transaction times for transaction-oriented services, such as an auction website. If such a service is replicated across multiple servers, the servers must be kept consistent, and replicating the service across too many servers does more harm than good, as the cost of maintaining consistency offsets the gains from distributing the load. We explore the performance of heuristic-based algorithms that identify provisionings that are close to optimal.

Last, we consider a general provisioning problem for a business that owns a server farm, and must decide how to allocate its fixed pool of servers among a set of independent companies that offer transaction-oriented services. The optimal allocation is a function of the loads imposed by each company, the profit each company offers for a successful transaction, and the cost that the server farm must pay for each failed transaction. We derive three approximation methods for partitioning the fixed set of servers among the various companies.


Bong-Jun Ko: Distributed, self-organizing replica placement in large scale networks

Thesis (pdf)

Emerging large scale distributed networking systems, such as P2P file sharing systems, sensor networks, and ad hoc wireless networks, require replication of content, functionality, or configuration to enact or optimize communication tasks. The placement of these replicated resources can significantly impact performance. In this thesis, we present self-organizing, fully distributed, asynchronous, scalable algorithms and protocols that can be used to place replicated resources in desirable locations in three different environments.

We first explore theoretical properties of replica placement problems at a fundamental level, and develop a fully distributed protocol in the context of a graph with colored nodes, where a node's color indicates the replica/task that it is assigned. In this theoretical work, we introduce a novel mechanism with which each node in the network decides its own color, resulting in a graph coloring such that each node is close to every replicated color. The combination of theoretical results and simulation shows that the protocol stabilizes quickly to a coloring that is close to optimal under a set of optimization criteria.
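
To make the flavor of such a protocol concrete, the sketch below uses one simple local rule: in its own (asynchronous) turn, each node adopts the color whose nearest occurrence among the other nodes is currently farthest away. This rule and the example graph are a simplified stand-in for the thesis's actual protocol and optimization criteria.

    import random

    def scarce_color(graph, colors, v, palette):
        # BFS outward from v (excluding v itself): distance to the nearest
        # other node of each color; unreachable colors count as farthest.
        dist, frontier, seen, d = {}, list(graph[v]), {v} | set(graph[v]), 1
        while frontier:
            for u in frontier:
                dist.setdefault(colors[u], d)
            frontier = [w for u in frontier for w in graph[u] if w not in seen]
            seen |= set(frontier)
            d += 1
        return max(palette, key=lambda c: dist.get(c, d))

    def self_organize(graph, palette, rounds=20, seed=1):
        rng = random.Random(seed)
        colors = {v: rng.choice(palette) for v in graph}
        for _ in range(rounds):
            for v in rng.sample(list(graph), len(graph)):  # async order
                colors[v] = scarce_color(graph, colors, v, palette)
        return colors

    ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
    print(self_organize(ring, "AB"))   # both colors end up near every node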

We then study the distributed server placement problem in large-scale client-server networks, such as content distribution networks (CDN) and online gaming networks, as a real-world application of our replica placement algorithm. The problem is explored in the context of a server coloring problem, where each server node is assigned a color that represents a particular server resource. Simulation results show that the distributed algorithm quickly generates a placement for multiple colors simultaneously, and reduces the expected client-server distance by almost half compared to the distance when the assignment of replicated servers is done at random.

Last, we design, implement, and evaluate a distributed channel assignment algorithm in multi-hop 802.11 mesh networks, taking a further step toward validating the effectiveness of our replica placement mechanism. While we take the same strategy in addressing the distributed assignment problem in a self-organizing manner, we also make a significant number of new contributions that deal with the unique challenges present in the wireless networking environment, from algorithmic modifications to resolving implementation-specific issues. Experimental results obtained from the testbed show that our channel assignment algorithm improves the network's capacity by 50% in comparison to a homogeneous channel assignment and by 20% in comparison to a random assignment.


Rita Wouhaybi: Algorithms for Reliable Peer-to-Peer Networks

Thesis (pdf)

Over the past several years, peer-to-peer systems have generated many headlines across several application domains. The increased popularity of these systems has led researchers to study their overall performance and their impact on the underlying Internet. The unanticipated growth in popularity of peer-to-peer systems has raised a number of significant problems; for example, network degradation, and in some cases loss of connectivity between nodes, can be observed, making the overlay application unusable. As a result, many peer-to-peer systems cannot offer sufficient reliability in support of their applications. This thesis addresses the lack of reliability in peer-to-peer networks and proposes a number of algorithms that can provide reliability guarantees to peer-to-peer applications. Note that reliability in a peer-to-peer networking context is different from TCP-style reliability. We define a reliable peer-to-peer network as one that is resilient to changes such as network dynamics and can offer participating peers increased performance when possible. We make three contributions to the area of peer-to-peer reliability, described below.

Our research methodology encompasses simulation and analytical modeling to understand the behavior and properties of the proposed systems, together with substantial experimentation on the PlanetLab platform as practical proof of concept of our ideas. The common overarching theme of the thesis is the design of new resilient network algorithms capable of offering high performance to peers and their applications.

As more and more applications rely on underlying peer-to-peer topologies, the need for efficient and resilient infrastructure has become more pressing. A number of important classes of topologies have emerged over the last several years, all of which have various strengths and weaknesses. For example, the popular structured peer-to-peer topologies based on Distributed Hash Tables (DHTs) offer applications assured performance, but are not resilient to attacks and major disruptions that are likely in the overlay. In contrast, unstructured topologies, where nodes create random connections among themselves on the fly, are resilient to attacks but cannot offer performance assurances, because they often create overlays with large diameters that make some nodes practically unreachable. In our first contribution, we propose Phenix, an algorithm for building resilient low-diameter peer-to-peer topologies that can resist different types of organized and targeted malicious behavior. Phenix leverages the strengths of these existing approaches without inheriting their weaknesses: it builds topologies of nodes that follow a power law while being fully distributed, requiring no central server and thus eliminating the possibility of a single point of failure in the system. We present the design and evaluation of the algorithm and show, through extensive analysis, simulation, and experimental results obtained from an implementation on the PlanetLab testbed, that Phenix is robust to network dynamics such as bootstrapping mechanisms, joins/leaves, node failure and large-scale network attacks, while maintaining low overhead when implemented in an experimental network.

A number of existing peer-to-peer systems such as Kazaa, Limewire and Overnet incorporate parallel downloads of files into their system design to improve the client's download performance and to offer better resilience to the sudden departure or failure of server nodes in the network. Under such a regime, a requested object is divided into chunks and downloaded in parallel to the client using multiple serving nodes. The implementation of parallel downloads in existing systems is, however, limited and non-adaptive to system dynamics (e.g., bandwidth bottlenecks, server load), resulting in far-from-optimal download performance and higher signaling cost. In order to capture the selfish and competitive nature of peer nodes, we formulate the utilities of serving and client nodes, and show that selfish users in such a system have incentives to cheat, impacting the overall performance of nodes participating in the overlay. To address this challenge, we design a set of strategies that drive client and server nodes into situations where they have to be truthful when declaring their system resource needs. We propose a Minimum-Signaling Maximum-Throughput (MSMT) Bayesian algorithm that strives to increase the observed throughput for a client node while maintaining a low number of signaling messages. We evaluate the behavior of two variants of the base MSMT algorithm (called the Simple and General MSMT algorithms) under different network conditions and discuss the effects of the proposed strategies using simulations, as well as experiments from an implementation of the system on a medium-scale parallel-download PlanetLab overlay. Our results show that our strategies and algorithms offer robust and improved throughput for downloading clients while significantly reducing the signaling overhead in comparison to existing parallel-download-based peer-to-peer systems.

Network architects and operators have used knowledge of various network metrics, such as latency, hop count, loss and bandwidth, both for managing their networks and for improving the performance of basic data delivery over the Internet. Overlay networks, grid networks, and p2p applications can also exploit similar knowledge to significantly boost performance. However, the size of the Internet makes the task of measuring these metrics immense, both in terms of infrastructure requirements and measurement traffic. Inference and estimation of network metrics based on partial measurements is a more scalable approach. In our third contribution, we propose a learning approach for scalable profiling and prediction of inter-node properties. Partial measurements are used to create signature-like profiles for the participating nodes. These signatures are then used as input to a trained Bayesian network module that estimates the different network properties. As a first instantiation of these learning-based techniques, we have designed a system for inferring the number of hops and the latency among nodes. Nodes measure their performance metrics to known landmarks and use the results to create anonymous signature-like profiles. These profiles are then used by a Bayesian network estimator to provide nodes with estimates of the proximity metrics to other nodes in the network. We present our proposed system and performance results using real network measurements obtained from the PlanetLab platform. We also study the sensitivity of the system to different parameters, including training sets, measurement overhead, and the size of the network. Though the focus is on proximity metrics, our approach is general enough to be applied to infer other metrics of interest, potentially benefiting a wide range of applications.
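
The profiling idea can be sketched in a few lines: each node's signature is its vector of measured latencies to a handful of landmarks, and pairwise latency between two nodes is predicted from signatures alone. For brevity the sketch substitutes a nearest-neighbor estimator for the thesis's trained Bayesian network; the landmark counts and training examples are invented.

    import math

    def profile(latencies_to_landmarks):
        return tuple(latencies_to_landmarks)   # the node's signature

    def estimate_latency(sig_a, sig_b, training, k=2):
        """training: list of ((sig_a, sig_b), measured_latency) examples.
        Predict by averaging the k most similar signature pairs."""
        def gap(pair_p, pair_q):
            return math.dist(pair_p[0] + pair_p[1], pair_q[0] + pair_q[1])
        nearest = sorted(training, key=lambda ex: gap(ex[0], (sig_a, sig_b)))
        return sum(lat for _, lat in nearest[:k]) / k

    a, b = profile([10, 50, 30]), profile([12, 48, 33])
    train = [(((10, 52, 30), (11, 47, 35)), 8.0),
             (((90, 20, 70), (85, 25, 60)), 40.0),
             (((11, 50, 31), (13, 49, 30)), 7.0)]
    print(estimate_latency(a, b, train))       # averages the similar pairs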


Xiaowei Zhang:

Thesis (pdf)

Seoung-Bum Lee: Adaptive Quality of Service for Wireless Ad hoc Networks

Thesis (pdf)

This thesis contributes toward the design of a new adaptive quality of service (QOS) paradigm for wireless ad hoc networks. We address some of the key performance problems in the broader realm of wireless ad hoc networks, including mobile ad hoc networks and emerging wireless ad hoc sensor networks.

Wireless ad hoc networks represent autonomous distributed systems that are infrastructureless, fully distributed, and multi-hop in nature. Over the last several years, wireless ad hoc networks have attracted considerable research attention in the general networking and performance community. This has been fueled by recent technological advances in the development of multifunctional and low-cost wireless communication devices. Wireless ad hoc networks have diverse applications spanning several domains, including military, commercial, medical, and home networks. As a result of all this research activity, wireless ad hoc networks are starting to move from the research domain into the real world and are gradually being integrated into our daily lives. Projections indicate that this will accelerate later in the decade, to the point where some analysts predict that these types of self-organizing wireless devices will eventually become the dominant form of communications infrastructure.

To cope with the unpredictable nature of this highly dynamic environment, wireless ad hoc networks need to be able to adapt to changes in resource availability (i.e., energy, bandwidth, processing power, network density, and topology changes) and overcome any unanticipated networking problems while satisfying a wide range of application requirements. Meeting these requirements in such an environment is very challenging because the performance observed by users, devices, and routing paths selected through the network will continuously change in response to the time-varying network dynamics.

This thesis addresses some of the key issues needed to meet the requirements in support of the adaptive QOS for wireless ad hoc networks. They include (1) an adaptive QOS framework and signaling protocol for mobile ad hoc networks, (2) congestion mitigation in mobile ad hoc networks, and (3) a cost-efficient agile routing mechanism for wireless ad hoc sensor networks.

In the first contribution of this thesis, we study the technical challenges of QOS support in mobile ad hoc networks and propose the INSIGNIA QOS framework, which is designed to support the adaptive service paradigm. The key component of the framework is the INSIGNIA signaling system, an in-band signaling system specifically designed to address the adaptive QOS challenges of mobile ad hoc networks. The INSIGNIA signaling system is recognized as one of the first QOS signaling protocols for mobile ad hoc networks. We also present a detailed performance evaluation of the IEEE 802.11-based INSIGNIA signaling system with a number of well-known MANET routing protocols. The INSIGNIA system shows operational transparency to a number of MANET routing protocols and offers significant performance gains for various TCP and UDP flows.

Next, we investigate the MANET-specific congestion conditions called hotspots. A hotspot is defined as a node (or a group of nodes) experiencing flash congestion or a period of excessive contention in a wireless ad hoc network. Hotspots can exist even in lightly loaded ad hoc networks and can severely degrade network performance. The existence of a hotspot is largely due to node mobility and the associated traffic patterns: mobility continuously changes the network topology and causes ongoing traffic to reroute, which varies the network loading conditions and produces transient congestion. Hotspots cause packet loss and increased end-to-end delay, and can even trigger route maintenance because they are often misinterpreted as routing failures. As a solution to this problem, we propose a Hotspot Mitigation Protocol (HMP) that works with best-effort routing protocols. HMP suppresses and disperses new or rerouted flows from hotspot regions to mitigate congestion, and also provides a traffic throttling scheme that rate-controls best-effort TCP flows to relieve congestion.

In the final contribution of the thesis, we shift our research focus to wireless ad hoc sensor networks, a new emerging frontier in wireless ad hoc networking. Based on the observation that current routing algorithms for sensor networks yield poor information delivery (i.e., poor fidelity as measured at the Internet gateway to the sensor network, typically called a sink), we investigate the problem and the solution space using the TinyOS embedded operating system in an experimental testbed of Mica2 mote sensors. We show that the poor fidelity is largely due to the unresponsive nature of the route selection convention commonly used in sensor networks. To resolve this problem, we propose an agile, cost-effective, and high-fidelity hop-by-hop routing protocol called Solicitation-based Forwarding (SOFA). SOFA achieves fast path convergence at network deployment time and acquires an alternative path quickly, with minimal signaling overhead, when faced with path-changing conditions due to network dynamics. Path maintenance in SOFA is minimal, and when a new sensor is added to the network it is integrated quickly and seamlessly. SOFA shows a significant reduction in energy consumption, with the savings primarily due to the decrease in signaling overhead. The on-demand nature of SOFA makes it cost effective; its agile, self-adapting nature makes it resilient to network vagaries; and its use of timely solicitation-based handshakes makes its forwarding decisions effective in data delivery.
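
The hop-by-hop handshake can be sketched as follows: a node holding data broadcasts a SOLICIT, willing neighbors answer after a short delay, and the data is handed to the first responder, so no end-to-end route state is kept. The reply-delay model (closer-to-sink neighbors answer sooner, with jitter) and node names are assumptions for illustration, not SOFA's specified timers.

    import heapq, random

    def sofa_hop(neighbors, my_hops_to_sink):
        """neighbors: {id: hops_to_sink}. Returns the first OFFER received,
        i.e., the chosen next hop, or None (re-solicit later)."""
        offers = []
        for nbr, hops in neighbors.items():
            if hops < my_hops_to_sink:                 # downstream nodes only
                reply_delay = hops + random.random()   # contention jitter
                heapq.heappush(offers, (reply_delay, nbr))
        if not offers:
            return None
        _, next_hop = heapq.heappop(offers)
        return next_hop                                # send DATA, await ACK

    random.seed(2)
    print(sofa_hop({"n3": 2, "n5": 4, "n9": 1}, my_hops_to_sink=3))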


Boonsit (Teddy) Yimwadsana: Molecular self-assembly

Thesis (pdf)

The parallelism of molecular computation has made algorithmic self-assembly one of the most exciting fields of the past decade. Chemical kinetics contributes to our understanding of self-assembly through the models provided by systems of reaction-rate equations in which the variables are time-dependent concentrations of the various self-assembled structures. However, these systems of ordinary differential equations are usually non-linear, and it is very difficult to find explicit solutions for them. In the case where a large number of distinct structures is involved, a large number of equations is needed, and even numerical methods have limited usefulness. On the other hand, we have been able to solve for the final yield (the concentration of the desired self-assembled structures) once the system is in its absorbing state. This thesis provides novel methods for obtaining explicit formulas for the yield of simple systems [13, 11]. This analysis of yields led us to the discovery of previously unknown phase-transition phenomena, in which 100% yields are obtained for certain subsets of the reaction rate parameters.

To acquire insights into more complex systems, we use numerical methods and simulations. In the latter case, we formulate discrete-event simulations from elementary collision theory, which leads to a Markovian birth-and-death process that models self-assembly events (see [17]). The novelty of our approach lies in its simplicity and in our proof that, in the hydrodynamic limit, the birth-and-death equations become the reaction-rate equations of chemical kinetics.

These contributions form Part I of the thesis. Part II shifts to the algorithmic self-assembly of specific molecules, viz., those constructed from DNA. Computation becomes a crystal-growth process in two dimensions; it is modeled formally by tiling systems whereby distinct tiles represent the various types of molecules; each tile has edge encodings that determine the types of molecules that can bond with the given tile during crystal growth, and these encodings are determined by the computation desired of the tiling. Here, our interest is no longer in yields, but rather in the time required to complete the computation implicit in crystal growth. Our key contribution is a mapping of this two-dimensional tiling process into a one-dimensional particle process, and our discovery that it is precisely the well-known 'totally asymmetric simple exclusion process' (TASEP) widely studied in applied mathematics and the physical sciences. Once we identified the event in the TASEP that is equivalent to the time to self-assemble (i.e., tile) a rectangular structure, we could apply well-known results from the TASEP literature. (See our work in [12, 11], which focuses on the asymptotics of hydrodynamic limits.)

These tiling results are based on idealized crystal-growth initial conditions. In the laboratory, these initial conditions are in fact part of the growth process, an observation that complicates the process substantially. Our entirely new analysis (see [15, 16]) of this generalized tiling process leads to explicit formulas for self-assembly times, once again in the hydrodynamic limit.

The low-energy, vast parallelism of molecular computation is perhaps its chief advantage over alternative paradigms. It is therefore of obvious interest to quantify this parallelism by means of an analysis that yields explicit formulas. Just such formulas, describing parallelism in the tiling model, are among the original contributions of this thesis.

Errors in self-assembly can and do occur in practice, to an extent that creates a serious design problem that must be taken into account. We propose and analyze a novel temperature-pulsing method (see [14]). The method is based on a roll-back process analogous to the well-known checkpointing method of error-tolerant computing systems: by periodic temperature pulses, the weakly bonded erroneous regions created during crystal growth can be detached from properly self-assembled regions of the crystal. Again, by a key observation related to the well-known Hammersley process, it is shown that the expected time to self-assemble square tilings is proportional to the expected length of the longest increasing subsequence of a random permutation of a given set of integers.
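
The connection can be checked numerically: patience sorting computes the length of the longest increasing subsequence of a permutation in O(n log n), and for a uniform random permutation of n integers the expected length approaches 2*sqrt(n) for large n (the classical Vershik-Kerov / Logan-Shepp result). The experiment below is a generic illustration of that fact, not code from the thesis.

    import bisect, random

    def lis_length(perm):
        piles = []                  # piles[i] = smallest tail of an
        for x in perm:              # increasing subsequence of length i+1
            i = bisect.bisect_left(piles, x)
            if i == len(piles):
                piles.append(x)
            else:
                piles[i] = x
        return len(piles)

    random.seed(0)
    n, trials = 10_000, 20
    avg = sum(lis_length(random.sample(range(n), n))
              for _ in range(trials)) / trials
    print(avg, 2 * n ** 0.5)        # the two values should be close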


Jing Feng: Analytical and experimental studies of TCP variants

Thesis (pdf)

In recent years, significant progress has been made in modelling and analyzing Internet congestion control algorithms. In this dissertation, we study and analyze the AIMD congestion control algorithm. Based on our theoretical results, we further propose a new class of algorithms, including 2Mark and FreeDrop TCP.

We provide a mathematical analysis of AIMD in the hydrodynamic limit, the theoretical underpinning of this thesis. First, we review the generalized resource allocation problem based on AIMD, where limited resources, i.e., bandwidth and capacity, are shared by N end users. Without further assumptions, we derive the probability distribution of the throughput/rate of each individual user. Then we study how each system parameter affects the throughput and fairness of the system under AIMD. We extend our analysis to the scenario in which there are non-uniform users and varying grades of service. We also propose a new service differentiation algorithm, 2Mark, based on our theoretical findings.
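
A toy version of the system under study, with synchronized congestion feedback (a simplification of the stochastic model analyzed in the thesis), shows the familiar convergence-to-fairness behavior of AIMD: every multiplicative decrease shrinks the differences between user rates, so all flows end up sawing around an equal share of the capacity.

    def simulate_aimd(n=10, capacity=100.0, alpha=1.0, beta=0.5, steps=2000):
        rates = [float(i + 1) for i in range(n)]    # deliberately unfair start
        for _ in range(steps):
            rates = [r + alpha for r in rates]      # additive increase
            if sum(rates) > capacity:               # shared congestion signal
                rates = [r * beta for r in rates]   # multiplicative decrease
        return rates

    # Rates become nearly equal, oscillating between about beta*C/n and C/n.
    print([round(r, 1) for r in simulate_aimd()])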

We present both simulation and theoretical studies of the proposed service-differentiation algorithm 2Mark, including a performance analysis. Numerous simulation results and comparisons to other well-known protocols (e.g., TCP SACK and HS-TCP) are discussed.

A new TCP congestion control algorithm, FreeDrop TCP, motivated by the 2Mark algorithm, is proposed. Compared to standard TCP, the objective of FreeDrop TCP is to provide efficient yet aggressive congestion control in high-speed, long-delay networks. This is accomplished by implementing a new class of TCP response functions. We study the throughput under FreeDrop TCP analytically, and we present various simulation results comparing this protocol to other existing TCP variants that target the high-speed network environment.


Daejoong Kim: Grid-based geographic routing for mobile ad-hoc networks

Thesis (pdf)

This thesis investigates geographic routing in mobile ad hoc networks. In geographic routing, each node knows the positions of its one-hop neighbors, and packets are forwarded to a neighbor that is closer to the destination. This simple forwarding strategy fails when there are obstacles, such as voids or physical obstructions, that prevent radio communication between nodes; a recovery procedure is then used to get around the obstacle. Earlier recovery techniques require that nodes know the exact positions of other nodes. However, inexpensive positioning devices do not provide sufficient accuracy for these algorithms, and it is difficult to obtain accurate position information for moving nodes.

Our approach to geographic routing begins with the observation that, if nodes know the shape and position of obstacles, they can make better routing decisions and packets can avoid the obstacles in advance. A challenge is that obstacles in networks have complex shapes and change as edge nodes move. Our engineering solution is to partition the network into a grid and represent the obstacles on a grid map. The grid approximation represents obstacles by grid edges that are convenient to use and rarely affected by small movements of nodes. This thesis presents a geographic shortest-path routing scheme that uses the obstacle map to route packets along the geographic shortest path.
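
A toy version of the grid idea: the field is partitioned into cells, an obstacle is recorded as a set of blocked cells on the map, and packets follow a shortest grid path around it (plain BFS here, since all cell transitions cost the same). The grid size and obstacle shape are invented for illustration.

    from collections import deque

    def grid_route(width, height, blocked, src, dst):
        """Shortest path over grid cells that avoids blocked cells; small
        node movements change cell occupancy, not the map's structure."""
        prev, q = {src: None}, deque([src])
        while q:
            cell = q.popleft()
            if cell == dst:                    # walk back to recover the path
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                        and nxt not in blocked and nxt not in prev):
                    prev[nxt] = cell
                    q.append(nxt)
        return None

    wall = {(2, y) for y in range(4)}          # a void spanning four rows
    print(grid_route(5, 5, wall, (0, 2), (4, 2)))   # detours above the wall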

In addition, the grid partitioning of a network naturally aggregates nodes into non-overlapping grid cells. The mobile network can be modeled as a fixed-pattern network of occupied grid cells. The grid cells do not move but may change their state between "on" and "off" according to their occupancy. The occupancy of a grid cell depends not on the mobility of nodes but on their density. As node density increases, grid cells become more stable. Based on this idea, we design a recovery algorithm that is robust against mobility and location errors. The novel recovery algorithm eliminates the need for the planarization of the network, on which earlier recovery techniques have relied, by taking advantage of the grid structure. The thesis also presents simulation results that evaluate grid-based routing and compare it with conventional position-based routing.


Andreas Constantinides: Seamless downloading in synchronous cache networks

Thesis (pdf)

A file F is sited at the root of a tree network of nodes, each node able to cache part or all of F. Each node may also have a client wanting to download F. Links between neighbors in the tree have given transmission delays, but there is no transmission delay from a client to the node where it is located. The paradigm of seamless downloading requires that requesting nodes, i.e., nodes with clients, start receiving F immediately and receive it continuously until F is fully assembled, and that this be implemented by means of a protocol whereby nodes are unaware of network structure beyond links to immediate neighbors. Because of similar processes known as self-assembly in biology and molecular science, it is natural to adopt the term seamless self-assembly for present purposes.

We formulate a discrete-time, synchronous-network model, in which clients send requests upstream toward the root, and in response, file units are sent downstream. We assign segments of F to the node caches for several problem variants so that seamless self-assembly is realized and, simultaneously, the total cache size achieves a lower bound determined solely by the network link delays. In the CLIENT problem, an arbitrary but given subset of nodes (clients) request a file F simultaneously at time 0. The subsets of F cached at the nodes must be such that: (1) seamless downloading occurs at each client, and (2) the total cache size is minimized. Removing the subset of clients from an instance of CLIENT gives the ANY-CLIENT problem for which the cache assignments must be such that, no matter which nodes act as clients, seamless downloading is achieved at each, and solutions must incur minimal storage cost. In solutions to the CLIENT and ANY-CLIENT problems, clients are prepared to re-sequence segments of a file received out of order. Thus, if the file is a video, the requesting node may experience an initial delay during which some portion of the video is buffered, before streaming in the usual sequential sense can begin. Our solution to the following problem on a path then becomes of interest: From the set of solutions to the CLIENT problem, find one that minimizes the maximum delay until streaming, in the usual sequential sense, can begin at each client. Finally, allowing requests and in response file-unit transfers to occur in both directions complicates the basic problems. We analyze the bidirectional CLIENT problem on a path and assess the benefits derived from the bidirectional flow of units. Extensions of the seamless downloading problem in several dimensions are discussed as directions for further research.
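
Under a deliberately simplified timing model (assumed here, not the thesis's exact model: slotted time, a request at slot 0, one file unit per slot per link, a unit cached at one-way delay d reaching the client no earlier than slot d, and contention between upstream senders ignored), seamlessness of a cache assignment on a path can be checked directly: unit i, consumed at slot i, must be cached somewhere within delay i of the client.

    def seamless(cache, delays, file_size):
        """cache[k]: set of units stored at the k-th node up the path;
        delays[k]: one-way delay from the client to that node."""
        for unit in range(file_size):
            holders = [d for d, units in zip(delays, cache) if unit in units]
            if not holders or min(holders) > unit:
                return False     # unit cannot reach the client in time
        return True

    # Nodes at delays 0 (the client itself), 2, and 5 hold successive segments.
    print(seamless([{0, 1}, {2, 3, 4}, set(range(5, 12))], [0, 2, 5], 12))
    print(seamless([{0}, {1, 2}, set(range(3, 12))], [0, 2, 5], 12))  # unit 1 late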


Raj Kumar Rajendran: Strong Detection of Network Protocol Errors

Thesis (pdf)


Yee Him (Patrick) Cheung: Telescopic Data Compression with Feedback Data Enhancement for Dense Sensor Networks

Thesis (pdf)


Kyung Joon Kwak: Minimal Sensing Complexity & Distributed Algorithms in Sensor Networks

Thesis (pdf)


Shane Eisenman: People-Centric Mobile Sensing Networks

Thesis (pdf)

This thesis contributes a new system in support of large scale people-centric sensing applications. Over the last decade, wireless sensor networking has developed into arguably the most active area in networking research. The state of the art largely follows an application-specific philosophy, where modest numbers of static wirelessly-connected sensor nodes are placed in the target environment in support of a single application. In a properly engineered network, sensor nodes are well-equipped and well-positioned to best provide the connectivity and sensing required by the application. Such networks are ill-suited, however, to the demands of a new class of applications focused on providing sensor information about people, their daily lives, and their environments. These people-centric applications require the ability both to sample very detailed information on the individual scale and to provide a view of the urban landscape, a very large scale challenge. A new approach is required.

Therefore, we propose the novel MetroSense architecture in support of people-centric sensing. While incorporating static infrastructure elements, the architecture achieves large-scale sensing coverage primarily through devices with embedded sensors, such as mobile phones, that people already carry daily. The architecture adopts an opportunistic paradigm in which interactions between mobile sensors and static infrastructure elements occur as allowed by the uncontrolled mobility of people. The power of this architectural choice is that it allows applications to marshal very large numbers of mobile sensors without deploying an extensive static grid dedicated to a particular task. People-centric sensing facilitates statistical spatial and temporal sampling of the field, building up data over time for applications that require this wide-area but still detailed view. More importantly, since the sensors are human-carried, they are always where they need to be (geographically) to sample people and their environments.

The people-centric architectural approach is not without its challenges, however. Reliance on a people-powered mobile architecture means that sensing device characteristics will be heterogeneous: different sensor types (e.g., camera, microphone, accelerometer) will be embedded in different devices, and these devices will have different storage, processing and communication capabilities. Additionally, sensors will be carried in a manner most convenient to the human (e.g., in a pocket or purse) and not necessarily in a manner most conducive to high-fidelity data gathering for an application. Finally, due to the uncontrolled mobility of people, rendezvous among the sensors and the static infrastructure elements may not happen on the time scales best suited to the needs of applications. Together, these challenges embody what we call the sensor availability problem, brought on by resource heterogeneity and reachability limitations, and the sensor context problem, caused by mismatches between the requirements of application queries (e.g., location, body position, angle/orientation) and the actual context of the mobile sensors to which these queries are assigned.

To address these two problems, in this thesis we present Quintet and Halo. Quintet aims to address the sensor availability problem by allowing for the discovery and sharing of sensing resources between devices that rendezvous in situ. Thus, a device assigned a query requiring samples from a sensor it does not have can borrow samples from a nearby device over short-range radio. Such sharing can also take place if the queried node's context does not match the sampling context required by the query. Among its contributions, Quintet provides a language for context specification, a method of context analysis and comparison between neighbors, a protocol managing the transfer of sensor data from the sharing devices to the consuming devices, and perhaps most importantly, a mechanism for learning the context specification that is most likely to yield higher fidelity data for a particular scenario. Halo addresses the sensor availability problem by providing a mechanism to increase the probability of rendezvous between mobile sensors and static infrastructure elements, in an effort to improve the perceived responsiveness of the system to "delay-aware" applications. Halo provides a tunable knob to balance responsiveness, in terms of average delay in answering a query, against consumption of system resources such as mobile sensor energy and the radio channel.

Finally, we present BikeNet, a mobile sensing system for mapping the cyclist experience. Built as an instantiation of the MetroSense architecture to provide insight into the real-world challenges of people-centric sensing, BikeNet uses a number of sensors embedded into a cyclist's bicycle to gather quantitative data about the cyclist's rides. BikeNet uses a dual-mode operation for data collection, using opportunistically encountered wireless access points in a delay-tolerant fashion by default, and leveraging the cellular data channel of the cyclist's mobile phone for real-time communication as required. BikeNet also provides a web-based portal for each cyclist to access various representations of her data, and to allow for the sharing of cycling-related data (for example, favorite cycling routes) within cycling interest groups, and of data of more general interest (for example, pollution data) with the broader community.

To support people-centric applications in the wireless sensor network domain, a new architectural approach is required. The MetroSense architecture allows for urban scale deployment at a relatively low cost compared to existing alternatives using an opportunistic approach. Together, Quintet and Halo provide a set of mechanisms that improve the responsiveness of the network and the quality of data sampled on behalf of applications, serving as a solid foundation for future people-centric sensing networks research.


Xiaozhu Kang: Caching and Scheduling for Efficient Content Distribution

Thesis (pdf)


Gahng-Seop Ahn: Information-driven Tracking and Access Control in Wireless Sensor Networks

Thesis (pdf)


Jian Tan: New Origins of Heavy Tails with Applications to Information Networks

Thesis (pdf)