Performance Enhancing Proxy (PEP), for optimizing the overall bandwidth usage of the system, with functionalities that include data caching; Network Mobility, for ensuring seamless heterogeneous handovers by setting up multiple tunnels, predicting link failures, etc. The design and implementation suggestions for all modules within each of these functional categories are elaborated in Sections 4, 5, and 6, respectively.
Within the TWCS, we aim to deliver an optimized connected experience by prioritizing important traffic, enforcing SLA levels, respecting traffic characteristics, etc. These functions are referred to as the QoS aspect of this system. In the remainder of Section 4, we first describe the classification of the services, then discuss the data and control modules concerned with QoS provisioning, and finally tackle the implementation challenges.
Note that bandwidth demand is thus not included, as bandwidth shortage can be translated into these parameters. DiffServ is a set of enhancements to the Internet Protocol to enable QoS between hosts in different networks. Traffic is classified into a limited set of service classes, which are treated differently. This allows for greater scalability than per-flow end-to-end QoS, as used in IntServ, for example. We therefore identified the characteristics of the T2W services in Section 2. Next to the network characteristics, a second aspect to consider is the relative priority of the different T2W services.
This is given in Table 3, which was determined jointly with partners in the railway industry [37]. Note that there is no one-to-one mapping between service classes and priorities. A traffic flow is a portion of traffic, delimited by a start and stop time, that originates from a particular IPv6 source address and transport port number and is destined for a particular IPv6 destination address and transport port number [38]. The combination of the source address and a non-zero Flow Label (a 20-bit value in the IPv6 header, see Figure 5) uniquely defines a traffic flow.
If the Flow Label field has not been set by the source node, the Marker has to determine the traffic flow each packet belongs to. The Marker therefore inspects an n-tuple of parameters in the packet header, typically including the IP source address, IP destination address, source port number, destination port number and protocol identification. As it inspects multiple fields, the Marker is considered a 'multi-field classifier' [36]. The Marker then assigns the same Flow Label value to all packets that belong to the same traffic flow, although this field should normally only be set by the source node.
A Flow Label is set by means of a pseudo-random generator, so the chance that incoming traffic flows share the same Flow Label should be very small [25]. The assignment of a Flow Label to an n-tuple expires when termination messages are observed or, alternatively, when an inactivity timer runs out. The DSCP value indicates into what service class the packets are classified and what priority they have.
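The Flow Label bookkeeping described above can be sketched as follows. This is a minimal sketch: the class and method names are assumptions, while the pseudo-random, non-zero 20-bit label and the explicit expiry follow the behavior described in the text.

```python
import random

FLOW_LABEL_BITS = 20  # width of the IPv6 Flow Label field

class Marker:
    """Hypothetical sketch of the multi-field classifier's flow labeling."""

    def __init__(self):
        self._labels = {}  # n-tuple -> assigned Flow Label

    def flow_label(self, five_tuple):
        # five_tuple: (src addr, dst addr, src port, dst port, protocol)
        if five_tuple not in self._labels:
            # pseudo-random, non-zero 20-bit label, so collisions between
            # concurrent flows are very unlikely
            self._labels[five_tuple] = random.randint(1, 2**FLOW_LABEL_BITS - 1)
        return self._labels[five_tuple]

    def expire(self, five_tuple):
        # called when a termination message is seen or an inactivity
        # timer for this n-tuple runs out
        self._labels.pop(five_tuple, None)
```

All packets of the same n-tuple thus receive the same label until the assignment expires, after which a new flow with the same n-tuple gets a fresh label.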
For local use, the 'xxxx11' bit pattern can be used, which allows for 16 different service classes within the TWCS. Merging Tables 2 and 3, we propose to use the DSCP values for the T2W services as stated in Table 4 to indicate both the service class and the priority. The actual rules that determine how to identify a traffic flow as belonging to a certain T2W service (and thus determine its service class and priority) will need to be stated in the QoS Config module and are used by the Marker while operating.
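A minimal sketch of how the 16 local-use codepoints follow from the 'xxxx11' pattern; the index-based mapping below is an assumption for illustration, as the actual assignment of T2W services to DSCP values is the one proposed in Table 4.

```python
def local_dscp(index):
    """Map a local service-class/priority index (0-15) onto the
    'xxxx11' local-use DSCP pool: the four 'x' bits carry the index
    and the two low bits are fixed to binary '11'."""
    if not 0 <= index <= 15:
        raise ValueError("only 16 local-use codepoints are available")
    return (index << 2) | 0b11  # 6-bit DSCP ending in '11'

def is_local_use(dscp):
    """Check whether a 6-bit DSCP value falls in the 'xxxx11' pool."""
    return dscp & 0b11 == 0b11
```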
By labeling each packet with the Flow Label, service class and priority, the Marker's decisions are passed via the data plane to all subsequent modules which need to make decisions based on those parameters. The SLA Enforcer will enforce the agreed SLA levels per traffic flow, as stated in Section 4. Furthermore, the SLA Enforcer will drop all traffic flows that belong to a service class for which the requirements cannot be met (signaled by the Admission Control, see Section 4).
If the aggregate capacity of the available wireless links is smaller than the sum of the total load that needs to be sent from train to wayside (or vice versa), the Shaper will shape the different traffic flows by adapting their aggregate data rate to match the available capacity, based upon the link occupation that is signaled per service class by the Scheduler (see Section 4). The Scheduler also signals to the Shaper what traffic flows are mapped on what links, so the Shaper will know which traffic flow rates to adapt in order to avoid queue overflows in the Scheduler. This mechanism, which causes a transmitting device to back off from sending data packets until the bottleneck has been eliminated, is sometimes referred to as 'backpressure'.
This means certain packets will get dropped while others can pass through. Traffic flows with higher priority are favored over those with lower priority. Traffic flows with equal priority should be shaped in a way that each traffic flow gets a fair share of the available bandwidth. The drop probability distributions per priority need to be defined in the Shaper Config.
The Shaper Config also specifies for which priorities the starvation of traffic flows with lower priorities is allowed, which is typically only the case for the highest priority class, and it needs to include a minimum bandwidth per traffic flow. If this threshold is reached within the same priority class, it is better to drop a complete traffic flow than to shape all traffic flows equally. The service class of the traffic flows is not considered for traffic flow prioritizing, as differentiation based on service class is done in the Scheduler (see Section 4).
Still, the Shaper can inspect the service class of each packet to check whether the packet can be dropped because its service class requirements can no longer be met. This could happen, for example, when the maximum allowed delay has already been exceeded: the packet would arrive at its destination too late anyway and can already be dropped at the Shaper in order not to waste bandwidth on the wireless links.
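The priority handling described above, higher priorities served first with a fair share within each priority class, could be sketched as follows. The function names and the max-min fair policy are assumptions; note also that this sketch lets every class starve the classes below it, whereas the Shaper Config would typically allow starvation only for the highest priority class.

```python
def fair_share(demands, capacity):
    """Max-min fair split of `capacity` over per-flow demanded rates."""
    alloc, remaining, cap = {}, dict(demands), capacity
    while remaining:
        share = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            for f in remaining:          # everyone is capped at the fair share
                alloc[f] = share
            return alloc
        for f, d in satisfied.items():   # small flows keep their full demand
            alloc[f] = d
            cap -= d
            del remaining[f]
    return alloc

def shape(flows, capacity):
    """Hypothetical Shaper sketch. `flows` maps flow id ->
    (priority, demanded rate); a lower number means a higher priority.
    Returns flow id -> granted rate."""
    granted = {}
    for prio in sorted({p for p, _ in flows.values()}):
        members = {f: r for f, (p, r) in flows.items() if p == prio}
        granted.update(fair_share(members, capacity))
        capacity -= sum(granted[f] for f in members)
    return granted
```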
Implementation of the Shaper can be done with, e.g., the Click Modular Router. The traffic that has successfully passed all preceding modules finally arrives at the Scheduler, which allocates the traffic to the appropriate link, based on matching the service class of the traffic flow with the delay and jitter properties of the link. The Scheduler will signal to the Shaper what traffic flows are mapped onto what links, so the Shaper knows what traffic flows to shape when a link is becoming overloaded. For the Shaper to know when a link is becoming overloaded, the Scheduler signals the load on each link.
Therefore, it uses the principle of active queue management (AQM) [34]. When the queue occupation for a link is lower than a certain minimum threshold, the Scheduler signals the Shaper to allow more traffic. When the queue occupation exceeds a certain maximum threshold, the Scheduler signals to allow less traffic.
This is similar to the random early detection (RED) [34] mechanism, which can also be used in the Shaper (see Section 4). Traffic flow priority is no longer considered here, as this was already done in the Shaper. Service classes that require, for example, a low delay are mapped onto links offering a low delay. Once a traffic flow has been mapped onto a certain link, all following packets of the traffic flow will be allocated to the same link. This is done in order to reduce jitter.
Only when this link goes down will the traffic flow be rescheduled. They are indicated in Figure 4. The interface information module defines the type and characteristics of the interfaces that are needed to connect to NOPs. Instead, other measures will have to be taken at the wayside (see Section 4). The QoS Config contains the requirements per service class and priority, as well as the rule set that determines what traffic flows will be categorized into what service class and priority.
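The link mapping and pinning behavior described above, matching a flow's delay and jitter requirements to link properties and remapping only when the chosen link goes down, could be sketched as follows; the class and its interface are hypothetical.

```python
class Scheduler:
    """Hypothetical sketch of the Scheduler's link selection.
    A flow stays pinned to one link to limit jitter and is only
    remapped when that link goes down."""

    def __init__(self, links):
        self.links = links    # link id -> {"delay", "jitter", "up"}
        self.mapping = {}     # flow id -> link id

    def select(self, flow, max_delay, max_jitter):
        link = self.mapping.get(flow)
        if link is not None and self.links[link]["up"]:
            return link       # keep the existing mapping
        for lid, props in self.links.items():
            if props["up"] and props["delay"] <= max_delay \
                    and props["jitter"] <= max_jitter:
                self.mapping[flow] = lid
                return lid
        return None           # no available link meets the requirements
```

A flow whose requirements no link can meet gets no mapping here; in the architecture above that situation is handled earlier, by the Admission Control and the SLA Enforcer.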
The above only lists the SLA restrictions. The QoS Config contains the requirements per service class, while the interface information determines what each T2W link can offer. Based on this combination, the QoS Link Mapping deduces the supported service classes per link. All traffic flows for which the service class requirements cannot be met need to be proactively rejected. There is no point in sending them over the wireless links, as they will be discarded at their destination. Therefore, the Admission Control will signal the service classes that are currently not supported to the SLA Enforcer, which will drop the relevant traffic flows.
The Admission Control knows which service classes are no longer supported by combining information from the QoS Link Mapping, which states which service classes are supported over which link, from the Monitor, which reveals which links are currently available, and from the Link Prediction, which calculates which tunnels are likely to disappear within a very short time.
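This combination step amounts to simple set arithmetic, sketched below with a hypothetical interface: links that are down or predicted to disappear are excluded, and any service class no remaining link supports is reported for rejection.

```python
def unsupported_classes(link_mapping, links_up, links_at_risk):
    """Hypothetical sketch of the Admission Control decision.
    link_mapping : link id -> set of service classes it supports
                   (from the QoS Link Mapping)
    links_up     : currently available links (from the Monitor)
    links_at_risk: links predicted to disappear shortly
                   (from the Link Prediction)
    Returns the service classes the SLA Enforcer should reject."""
    all_classes = set().union(*link_mapping.values()) if link_mapping else set()
    usable = links_up - links_at_risk
    supported = set()
    for link in usable:
        supported |= link_mapping.get(link, set())
    return all_classes - supported
```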
Based on the information of these three modules, the Admission Control can calculate the service classes that are currently supported and will still be supported in the near future, and those that are not. This way, end devices can optionally subscribe to events that the Application Interface will generate to indicate, for example, the currently available bandwidth or supported service classes.
The end devices can use this information for their internal reasoning to find a suitable moment to start a certain application. These events will be generated based on the input from the Monitor. This component has a view on the performance of the wireless links and tells the Application Interface what the available bandwidth, jitter and delay are. Each new traffic flow that belongs to a service class that cannot be supported by the network will be dropped by the SLA Enforcer (see Section 4).
This functionality needs to be performed in the SLA Enforcer, before any other processing.
If this were only done in one of the subsequent modules, those traffic flows would unfairly be taken into account by the SLA Enforcer and add to the consumed data volume or data rate. This would leave end users with less bandwidth or data volume than they are entitled to.
When the data rate of the data traffic flows is adapted by the Shaper, its buffers will fill up and the source node needs to throttle back. When a queue's occupation has reached a certain threshold, the Shaper can start dropping some random packets. This way, the congestion control of the relevant source node will react and decrease its send rate, which also decreases the rate at which the Shaper's buffers fill up. Instead of dropping packets, they could also be marked using Explicit Congestion Notification (ECN), which would lead to the same data rate decrease but without retransmission overhead.
AQM is a better solution than just waiting for buffer overflows ('tail drop') to happen, as the latter would lead to TCP synchronization among the different source nodes: all nodes would take congestion measures simultaneously, so the network would first become under-utilized and then be flooded again when all nodes increase their send rates at the same time.
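The RED-style behavior described above, with ECN marking as an alternative to dropping, could be sketched as follows; the function name and the threshold and probability parameters are placeholders.

```python
import random

def red_action(avg_queue, min_th, max_th, max_p, ecn_capable):
    """Sketch of RED-style early drop/mark: below the minimum threshold
    every packet is enqueued, above the maximum every packet is dropped
    (or ECN-marked), and in between the drop probability ramps linearly
    up to max_p."""
    if avg_queue < min_th:
        return "enqueue"
    if avg_queue >= max_th:
        return "mark" if ecn_capable else "drop"
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    if random.random() < p:
        return "mark" if ecn_capable else "drop"
    return "enqueue"
```

Marking instead of dropping triggers the same sender back-off while sparing the retransmission of the affected packet.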
When the data payload is encrypted, no deep packet inspection can be performed by the Marker; it can then only look at the headers, if these are not encrypted, to determine the traffic flow, service class and priority. When a link goes down, the packets that were scheduled for this link need to be rescheduled to another link. The question arises how to design an appropriate data structure for the buffer of scheduled packets within the Scheduler, in order to still allow rescheduling.
A first suggestion to implement the buffers of the Scheduler, would be to have a FIFO queue per service class. When a link polls for a packet, a scheduling algorithm will then select the queue from which the first packet could be popped and sent. However, as we do not spread a traffic flow across multiple links, we need to check if the considered packet belongs to a traffic flow that is mapped to this link.
If this is not the case, the next service class queue should be considered. If the first packet in each of the service class queues belongs to a traffic flow that is mapped to another link, the traffic would stall on the link that polled for a packet. A second suggestion would be to keep a queue per link. However, when a link goes down, the packets that are still present in the relevant queues would need to be moved into the queues of another link.
Putting them at the end of those queues would not be fair, as they should rather be merged based on their time of arrival. A solution that avoids the disadvantage of possibly stalling links, as in the first suggestion, and of having to merge queues, as in the second suggestion, is to use a data structure per service class which allows selecting any packet rather than only the first one, e.g., a hashmap.
This way, the Scheduler will first select the hashmap of the appropriate service class and then take the next packet out of it that belongs to any of the traffic flows mapped on the link that polls for a packet. Within this TWCS architecture, we aim to centrally optimize the overall bandwidth usage within modules that are jointly referred to as the 'PEP'.
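The per-service-class, flow-indexed buffer described above could be sketched as follows (the class and its interface are hypothetical): because packets are kept per traffic flow with a global arrival order, a polling link can always take the oldest packet of any flow mapped onto it, and a dead link's packets require no queue merging.

```python
from collections import deque

class ClassBuffer:
    """Hypothetical sketch of the Scheduler buffer for one service class."""

    def __init__(self):
        self.flows = {}   # flow id -> deque of (arrival seq, packet)
        self._seq = 0     # global arrival counter, preserves arrival order

    def push(self, flow, packet):
        self._seq += 1
        self.flows.setdefault(flow, deque()).append((self._seq, packet))

    def pop_for(self, mapped_flows):
        """Pop the oldest packet among the flows mapped on the polling link."""
        best = None
        for flow in mapped_flows:
            q = self.flows.get(flow)
            if q and (best is None or q[0][0] < best[1][0][0]):
                best = (flow, q)
        if best is None:
            return None   # nothing buffered for this link
        flow, q = best
        _, packet = q.popleft()
        if not q:
            del self.flows[flow]
        return packet
```

When a link goes down, nothing has to be moved: the affected flows are simply remapped, and subsequent polls by the new link pick their packets up in arrival order.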
We first discuss the relevant data modules and their order, then describe the control modules, and finally tackle the implementation challenges concerning the PEP. In this section, we discuss the relevant modules in the data plane (see Figure 3) that deal with bandwidth optimization: the traffic optimizers and the accelerator. Traffic optimizer 1 (TO1) is a module which tries to decrease the load on the wireless links.
It can instantly reply to a device with the information it requested, without always having to send data over the T2W link. The functioning of TO1 will thus mostly be situated at the application layer of the open systems interconnection (OSI) model [44] and typically includes some kind of transparent caching proxies, such as a web proxy, a domain name system (DNS) cache and a simple mail transfer protocol (SMTP) proxy. If a traffic flow is eligible for this kind of traffic optimization, the connection is terminated here and TO1 replies with locally cached content, or it sets up a new connection with the destination server on the other side if no cached data was available or if the cached data was outdated.
The web proxy can be especially useful for the 'Passenger Internet' and 'Crew Intranet' services (see Section 2). Web browsers do not need to be explicitly configured to use the web proxy (this would not be scalable and would be too difficult for passengers); instead, the proxy operates transparently. Furthermore, all services could benefit from a DNS cache in TO1, as DNS is an ideal candidate for caching, and thus for performance gain: it is designed as a hierarchical distributed naming system with DNS records having a long lifetime, typically in the order of a couple of hours.
A negative cache, which maintains unresolvable records, could also be kept. As the slow propagation in the DNS system does not support fast addition and deletion of records, the cache does not need to support this either. Whereas the web proxy and DNS cache will decrease the network load by responding with locally cached copies, an SMTP proxy is meant for email relaying and will always have to forward the email that originated from an end user.
Therefore, the email can be locally stored in the SMTP proxy and be forwarded at a slower rate, or only when there is enough free capacity available over the wireless links. The TO1 module will likely prove most useful in the onboard MCE (rather than in the WCE), as it is more likely that the server application will reside on the wayside instead of on the train.
Nevertheless, there might be some specific uses for caching on the wayside, so this module can be implemented at both sides. For SMTP proxying, a widely known open source agent is sendmail [47]. The second traffic optimizer (TO2) is a module which aims to reduce the actual bandwidth of the traffic flows by using data compression.
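The TO1 DNS cache described above, TTL-based with a negative cache for unresolvable names, could be sketched as follows; the class, its interface and the negative-entry TTL are assumptions.

```python
import time

class DnsCache:
    """Hypothetical sketch of the TO1 DNS cache: resolved records are
    served locally until their TTL expires, and unresolvable names go
    into a negative cache so repeated failing lookups also avoid the
    T2W link."""

    NEGATIVE_TTL = 60  # assumed short lifetime for negative entries

    def __init__(self, resolver, clock=time.monotonic):
        self.resolver = resolver    # upstream lookup over the T2W link
        self.clock = clock
        self.cache = {}             # name -> (expiry, answer or None)

    def lookup(self, name):
        entry = self.cache.get(name)
        if entry and entry[0] > self.clock():
            return entry[1]         # served from cache, no T2W traffic
        try:
            answer, ttl = self.resolver(name)
        except LookupError:
            answer, ttl = None, self.NEGATIVE_TTL
        self.cache[name] = (self.clock() + ttl, answer)
        return answer
```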
The counterpart on the receiving end does exactly the opposite: it decompresses the data, so the receiver perceives the data traffic flow as unaltered. Whereas TO1 tries to decrease the traffic load that is to be sent over the wireless networks, TO2 forwards all incoming traffic but optimizes the given load in order to consume less bandwidth. TO2 should inspect the data traffic flows to check whether it is useful to perform data compression, as there are a number of cases where compression by TO2 is unwanted:
Data compression makes sense for, e.g., bulk data transfers, but not for interactive or real-time traffic: the latter type of data traffic consists of small packets which should not be delayed by data compression. When the original data is encrypted, compression is pointless: data compression tries to remove statistical redundancy, but encrypted data appears to be completely random, without any statistical redundancy. Similarly, it could be useful to check whether the data has been compressed already, as additional compression is in this case not likely to further decrease the data size and could even be counterproductive due to the extra control information.
However, modern implementations of web servers and browsers tend to compress web content themselves. This means that compression by the TWCS will only be useful in about a third of the cases for web pages. Considering the increasing importance of multimedia content over traditional web pages, the importance of compression by the TWCS will further decline, as multimedia content is typically already compressed. Another option could be to introduce lossy compression: for pictures, one could, for example, re-encode the images at a lower quality. As about two thirds [48] of the actual transmitted size of a web page is made up by images, a rather significant performance gain could be expected.
However, introducing additional lossy compression requires adequate knowledge about the considered use case and whether this degraded content provision is acceptable for the end user. Furthermore, as content is now actually being altered, one finds oneself in a jurisdictional gray zone.
For data compression, an open source implementation of a compression proxy exists, called ZIP Proxy [49]. Next, the data of the traffic flow is stripped of its TCP mechanism and the payload is encapsulated in UDP datagrams that still maintain the necessary control information, e.g., the original connection endpoints. This way, there is no competition for bandwidth by the TCP congestion mechanisms of each individual TCP data traffic flow, which leads to an increased overall system throughput. On the receiving side, the Accelerator sets up a new TCP connection for each traffic flow, based on the encapsulated original control information.
This way, the destination endpoint will not notice that the TCP connection was split by the Accelerators. The behavior of this Accelerator is thus distributed between a sending module and a receiving module, contrary to typical TCP accelerators. Space communication protocol standards (SCPS) is another TCP accelerator, originally mainly intended for satellite links, which consists of a transmitter and a receiver component. However, as depicted in Figure 6, it would still be unfair not to restrict the achievable data rates on the onboard network.
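The Accelerator's encapsulation step could be sketched as follows. The wire format below (a length-prefixed metadata header carrying the original connection endpoints, followed by the TCP payload) is entirely hypothetical; it only illustrates that the receiving Accelerator gets everything it needs to rebuild the original TCP connection.

```python
import json

def encapsulate(tcp_payload, src, dst, sport, dport, seq):
    """Wrap a TCP payload and its control information into a single
    UDP-ready datagram body (hypothetical format)."""
    header = {"src": src, "dst": dst, "sport": sport,
              "dport": dport, "seq": seq}
    meta = json.dumps(header).encode()
    return len(meta).to_bytes(2, "big") + meta + tcp_payload

def decapsulate(datagram):
    """Recover the control information and the original payload, so the
    receiving Accelerator can open the matching TCP connection."""
    n = int.from_bytes(datagram[:2], "big")
    header = json.loads(datagram[2:2 + n].decode())
    return header, datagram[2 + n:]
```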
The reason for this is that the cached data has once been transferred over the wireless link on request of a certain user, say Alice. This most probably happened at a slower rate than the achievable onboard data rate that is obtained when another user, say Bob, requests the same data and is provided with a locally cached copy. Furthermore, TO1 is to be put before the Shaper. Within TO1 (see Section 5), the traffic flow carrying a request is answered locally whenever possible. Therefore, that traffic flow does not need to be shaped onto the outgoing wireless links, and TO1 is placed before the Shaper.
This is explained as follows. Suppose TO2 were put before the Shaper, and two users, say Alice and Bob, are sending the same amount of data under the same SLA concerning maximum data rate. Furthermore, suppose Alice is sending a compressible data traffic flow, allowing TO2 to optimize the traffic by decreasing the amount of data to be sent, while Bob is sending an incompressible data traffic flow, for which TO2 optimizations are superfluous. In this case, the Shaper receives fewer bytes in the optimized traffic flow that originated from Alice than in the one from Bob.
However, it will send an equal amount of bytes for both traffic flows that it receives from TO2, as the SLAs of Alice and Bob are the same. When considering the original data that Alice and Bob sent, this results in a higher effective throughput for Alice compared to Bob (see Figure 7a), which would be unfair. Therefore, TO2 is placed after the Shaper (see Figure 7b).
One could still argue that, with TO2 placed after it, the Shaper (see Section 4) leaves the capacity freed by the TO2 optimizations unused. However, the excess capacity will be signaled from the Scheduler (see Section 4) back to the Shaper, which can thus shape for a total capacity that is higher than the effective available aggregate capacity on the links. In this way, all data traffic flows benefit from the optimizations in TO2, rather than a single user, as would be the case if TO2 were placed before the Shaper.
Figure 7: Comparison of two options for the relative positioning of the Shaper and TO2, which has a data compression functionality, considering two users with the same SLA, Alice and Bob, who are simultaneously sending a compressible and an incompressible data traffic flow, respectively.
TO1 does not have to exchange information with other modules, as it can work perfectly on its own. On the other hand, the TO2 in the MCE passes information to TO2 in the WCE or vice versa, in order for the receiving module to know what compression algorithm was applied and how to decompress the data. Likewise, the Accelerator in the MCE also passes information to the Accelerator in the WCE or vice versa, in order for the receiving module to know what the end destination of the original TCP traffic flow is.
To this end, the UDP protocol could, for example, be used. Communication with other modules is unnecessary, so TO1, TO2 and the Accelerator were not included in Figure 4. When implementing caching proxies in TO1, hardware constraints are to be considered: a caching web proxy can easily take a significant quantity of storage, which may not be available in a restricted environment such as ruggedized railway equipment. A good trade-off should be made between the decrease in load on the wireless links and the storage cost.
Likewise, for compression proxies in TO2, a trade-off should be made between the decrease in load on the wireless links and the cost of a more powerful processor, because, depending on the configuration and choice of algorithms, a compression proxy can consume a considerable amount of processing power.