Diverse client devices differ in hardware, software, and network connectivity. Each client requires content tailored for efficient rendering: different versions of the same information with appropriate image resolution and degree of text summarization. The Versatile Transcoding Proxy (VTP) architecture was proposed to transform the corresponding data or protocol according to the user's specification. Its effectiveness is enhanced by adopting dynamic cache categories and by proposing the scheme Maximum Profit Replacement with Dynamic Cache Categories (DCC-MPR). From the weighted transcoding graph, a caching candidate set is generated by dynamic programming, and cache replacement is then performed. A version that helps reduce the transcoding delay of other versions is given high priority. Based on the device capability and the user request, the corresponding transcoded version is sent either from the cache or, when that version is absent from the cache, after transcoding. Whether a version is cached depends on its popularity and size, from which the candidate set is generated. Cache replacement is based on generalized profit.

Keywords- caching candidate set, weighted transcoding graph, generalized profit, Dynamic Cache Categories


Advances in mobile communication technology let users access the Internet at any place and at any time through any mobile device. These mobile devices are heterogeneous, differing in everything from components to computing capability. Moreover, the bandwidth of mobile communication is limited, so the traditional web content designed for desktop computers might not be suitable for a mobile device. Hence, there is a need to transcode the content to a degraded format that is more appropriate for presentation on mobile devices.

Transcoding is the process of converting a media file or object from one format to another. Transcoding is often used to convert video formats (e.g., Betamax to VHS, VHS to QuickTime, or QuickTime to MPEG), but it is also used to fit HTML files and graphics files to the unique constraints of mobile devices and other Web-enabled products. These devices typically have smaller screens, less memory, and lower bandwidth. In this scenario, transcoding is performed by a transcoding proxy server or device, which receives the requested document or file and uses a specified annotation to adapt it to the client.

Transcoding systems can be divided into three classes according to the location where the transcoding process takes place: client-based, server-based, and proxy-based transcoding systems. In client-based approaches, transcoding is left to the mobile clients. The advantage of this approach is that it preserves the original semantics of the system architecture and transport protocols; however, transcoding at the client side is costly due to limitations in both bandwidth and computing power. On the other hand, transcoding at the server side is not flexible enough to satisfy each client's demands and requires too much unnecessary storage. An intermediate proxy can transcode the requested object on the fly into a proper version according to the client's specification before sending the object to the client. Therefore, the transcoding system is often implemented at an intermediate proxy.
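The proxy-based decision path described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation; the function signatures and the `preferred_version` profile key are our assumptions.

```python
# Minimal sketch of a proxy-based transcoding decision path. The proxy
# serves a cached version if one matches the client's specification, and
# otherwise fetches the original from the server and transcodes on the fly.

def serve(request_obj, client_profile, cache, origin_fetch, transcode):
    """Return the version of `request_obj` suited to `client_profile`."""
    wanted = client_profile["preferred_version"]
    cached = cache.get((request_obj, wanted))
    if cached is not None:                 # exact version already cached
        return cached
    original = origin_fetch(request_obj)   # full-fidelity copy from server
    adapted = transcode(original, wanted)  # on-the-fly adaptation at proxy
    cache[(request_obj, wanted)] = adapted
    return adapted
```

A second request for the same version is then answered from the cache without contacting the origin server.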

Conventional transcoding proxies can be further divided into two classes according to how the transformation logic is applied. The first class is the fixed transcoding proxy, which merely transcodes the input into the output without any context-aware processing. The second class is the heuristic transcoding proxy, which reads the capability profile from the client device and attempts to transform the content according to the device capability.

However, since the proxy has no knowledge of which information is important, it is difficult to determine the transformation strategy for the content in the heuristic approach. Although researchers have proposed various heuristics, these still suffer from the loss of important information or missed opportunities for better transcoding. Both the fixed transcoding proxy and the heuristic transcoding proxy are monolithic transcoders, meaning they can only provide transcoding services for content types or protocols that are recognized in advance.

When a new content type or communication protocol must be handled, an upgrade of the whole architecture is inevitable. This makes the transcoding proxy system rather hard to maintain.

The architecture of the Versatile Transcoding Proxy (denoted VTP) for Internet content adaptation has been proposed. In this model, the proxy can accept and execute a transcoding preference script provided by the client or the server to transform the corresponding data or protocol according to the user's specification, so that the proxy server avoids the uncertainty of the heuristic transcoding proxy. The VTP architecture can also be used to transcode many types of client-server systems: with the registration of the standard CC/PP (Composite Capabilities/Preference Profiles) device capability profile and a built-in CC/PP parser, the Versatile Transcoding Proxy realizes context awareness. It is therefore adaptable to heterogeneous devices.

Existing media caching systems treat each client request equally and independently. Various bit-rate versions of the same video clip may be cached at the proxy at the same time, which wastes storage. Furthermore, to enhance the effectiveness of the VTP architecture, a transcoding scheme, DCC-MPR, is proposed to maintain the cached objects and perform cache replacement in the transcoding proxy. The scheme Maximum Profit Replacement with Dynamic Cache Categories (DCC-MPR) contains two important components: mechanism DCC and algorithm MPR.

Mechanism DCC offers fine-grained control over the number of cache categories by building a weighted transcoding graph that dynamically depicts the transcoding relationship among transcodable versions. Based on the transcoding relationship among categories, algorithm MPR performs cache replacement according to the contents of the caching candidate set, which is generated by dynamic programming.

Algorithm MPR can be divided into two phases. The first phase runs when the proxy has sufficient space: once an object is queried, it is cached to increase the profit of future accesses. The second phase runs when the proxy has insufficient space: cache replacement is performed according to the priority of the requested object.

A cache replacement algorithm based on a generalized profit function is included. It evaluates the profit of caching each version of an object. This generalized profit function explicitly considers several newly emerging factors in the transcoding proxy and the aggregate effect of caching multiple versions of the same object. Note that the aggregate effect is not simply the sum of the costs of caching individual versions of an object; rather, it depends on the transcoding relationship among these versions. The notion of a weighted transcoding graph is devised to evaluate the corresponding aggregate effect efficiently.

Using the generalized profit function and the weighted transcoding graph, we propose in this paper an advanced cache replacement algorithm for transcoding proxies. In addition, an effective data structure is designed to facilitate the management of the multiple versions of different objects cached in the transcoding proxy. The proposed algorithm is shown to consistently outperform companion schemes in terms of delay saving ratio and cache hit ratio. The feasibility of such a differentiated service scheme depends on the availability of a range of variations of the content, so that the server can choose the right variation for the given request conditions, while the content provider can manually supply a number of different variations for use by the system.

The remainder of this paper is organized as follows. Section II introduces related work. Section III presents the details of the VTP architecture design and discusses its advantageous features and its usefulness in server transcoding preference applications. Section IV describes the design of scheme DCC-MPR. Section V presents the experimental evaluation of the scheme. Finally, Section VI concludes the paper.

Related Work

Proxy Caching for Video-on-Demand Using Flexible Starting Point Selection [2] assumes that the connection between the clients and the proxy is characterized by a high transmission rate and low latency. If the requested video content has already been accessed by one user and is cached at the proxy, the initial delay for the second and later users is significantly decreased compared to the case where content must be loaded from the remote server.

New Stream Caching Schemes for Multimedia Systems [11] proposed a multimedia caching strategy that includes several optimizations over the state of the art. The algorithm takes its roots from interval caching algorithms, but evolves towards a more adaptive approach that can obtain better performance for variable bit-rate streams and for serving media stored on multiple disks following different distributions.

QoS-Adaptive Proxy Caching for Multimedia Streaming over the Internet [15] proposed a media-characteristic-weighted replacement policy to improve the cache hit ratio of various media, including continuous and discontinuous media. Second, a network-condition- and media-quality-adaptive resource-management mechanism is introduced to dynamically re-allocate cache resources for different types of media according to their request patterns.

A pre-fetching scheme based on the estimated network bandwidth is described, and a miss strategy that decides what to request from the server in case of cache misses based on real-time network conditions is presented in [15]. Request and send-back scheduling algorithms, integrated with unequal loss protection (ULP), are also proposed to dynamically allocate network resources among different types of media. The prefix of a multimedia stream is stored at the proxy in advance. Upon receiving a request for the stream, the proxy immediately initiates transmission to the client while simultaneously requesting the remaining portion from the server. In this way, the latency between server and proxy is hidden.
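The prefix-caching idea can be sketched in a few lines: the proxy streams its stored prefix immediately, which hides the server-proxy latency while the remainder is fetched. This is our own illustration of the concept from [15]; the names and the `fetch_rest(stream_id, offset)` interface are assumptions.

```python
# Sketch of proxy prefix caching: serve the cached prefix at once, then
# the remainder retrieved from the origin server starting at the right
# byte offset.

def serve_stream(stream_id, prefix_store, fetch_rest):
    """Yield the stream: cached prefix first, then the remainder."""
    prefix = prefix_store.get(stream_id, b"")
    yield prefix                            # playback starts with no server delay
    yield fetch_rest(stream_id, len(prefix))  # remainder fetched concurrently
```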

A QoS-adaptive miss strategy and a pre-fetching algorithm that fully consider continuous- and non-continuous-media characteristics are also described in [15]. A weighted request scheduling scheme that efficiently allocates the network resources between proxy and server among different types of requests, together with a send-back scheduling scheme that efficiently utilizes the network resources between client and proxy based on media characteristics, are proposed and analysed.

To maintain the interrelationship among cache items and to perform cache replacement, the RESP (REplacement with Shortest Path) framework [7] has been proposed. It contains two primary components: procedure MASP (standing for Minimum Aggregate cost with Shortest Path) and algorithm EBR (standing for Exchange-Based Replacement). Procedure MASP maintains the interrelationship using a shortest-path table, whereas algorithm EBR performs cache replacement according to an exchanging strategy. To maintain the transcoding graph, procedure MASP uses a table to record the shortest-path information and determines the transcoding sources according to that table. To perform cache replacement, algorithm EBR uses a more profitable caching candidate to exchange for less profitable elements in the cache, so as to maximize the profit of the cached elements. Experimental results show that the proposed RESP framework outperforms algorithm AE in cache hit ratio. Under many circumstances, the RESP framework can approximate the optimal solution very effectively. Furthermore, the RESP framework incurs much lower computational complexity in processing user queries.
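The shortest-path bookkeeping that procedure MASP relies on can be illustrated as follows. This is our own minimal sketch, not the RESP authors' code: it precomputes the cheapest transcoding cost between every pair of versions and then picks the cheapest cached source for a requested version.

```python
# All-pairs shortest transcoding costs over the weighted transcoding graph
# (Floyd-Warshall), plus selection of the cheapest cached transcoding source.

from itertools import product

def all_pairs_cost(versions, edge_cost):
    """Return dict (u, v) -> cheapest transcoding cost from u to v."""
    INF = float("inf")
    d = {(u, v): (0 if u == v else edge_cost.get((u, v), INF))
         for u, v in product(versions, repeat=2)}
    for k, u, v in product(versions, repeat=3):  # k is the outermost loop
        if d[u, k] + d[k, v] < d[u, v]:
            d[u, v] = d[u, k] + d[k, v]
    return d

def cheapest_source(cached, target, dist):
    """Among cached versions, the one that transcodes to `target` cheapest."""
    return min(cached, key=lambda s: dist[s, target])
```

With edges 1→2 of cost 4 and 2→3 of cost 3, for example, the shortest-path table records that version 3 is cheaper to derive via version 2 than directly from the original.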

We compare the proposed DCC-MPR to algorithm AE adopted in [14]. During the experiments, unlike the various metrics used in [14], we choose the tightest one, the exact hit ratio, to validate the performance of DCC-MPR. The exact hit ratio is defined as the fraction of requests that are satisfied by the exact versions of the cached objects. This metric is also motivated by the fact that we usually intend to provide an exact version to users (rather than an overqualified one) for effective bandwidth usage. Since the comparison between AE and other schemes has been made in [14], we focus on comparing the performance of DCC-MPR and AE in this paper.

Cache Replacement for Transcoding Proxy Caching [10] addresses the problem of cache replacement for transcoding proxy caching, and an efficient cache replacement algorithm is proposed. That algorithm considers both the aggregate effect of caching multiple versions of the same multimedia object and cache consistency. If the cache size is large enough, the problem becomes trivial, since all objects can be stored in the cache such that the total access cost is minimized. Consequently, cache replacement algorithms are used to determine a suitable subset of web objects to be removed from the cache to make room for a new web object.

Existing cache replacement algorithms cannot simply be applied to transcoding proxy caching because of the factors newly emerging in the environment of transcoding proxies, such as the additional delay caused by transcoding, different sizes and different reference rates for different versions of a multimedia object, and the aggregate effect of the profits of caching multiple versions of the same multimedia object. An efficient cache replacement algorithm for transcoding proxies, AE for short, selects objects to remove from the cache one by one based on their generalized profit function. When an object is removed from the cache, the generalized profits of the relevant objects are revised. If the free space still cannot accommodate the new object, another object with the least generalized profit is removed, until enough room is made for the new object. However, this method is not optimal when there is more than one object to be removed.
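AE's one-by-one eviction loop can be sketched as below. This is an assumed minimal interface (a cache mapping item to size, and a `gprofit` callback re-evaluated against the current cache contents), not the original implementation; it shows why the greedy loop need not be optimal when several objects must go.

```python
# Sketch of algorithm AE's eviction loop: repeatedly remove the cached item
# with the least generalized profit until the new object fits. `gprofit` is
# recomputed against the shrinking cache, mirroring the profit revision step.

def ae_make_room(cache, new_size, capacity, gprofit):
    """cache: dict item -> size. Returns the list of evicted items."""
    evicted = []
    while cache and sum(cache.values()) + new_size > capacity:
        victim = min(cache, key=lambda item: gprofit(item, cache))
        del cache[victim]
        evicted.append(victim)
    return evicted
```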


The Versatile Transcoding Proxy architecture consists of three primary components: the service agent, the transcoding agent, and the Transcoding Preference Script (TPS). The TPS controls the behaviour of the VTP (Versatile Transcoding Proxy) server. The TPS is used to determine the properties of a device and to ask the associated software agents to supply the suitable parameters for transcoding. The transcoding agent is responsible for the transcoding actions that take place, making its decisions based on the TPS. The service agent obtains the TPS and decides which object needs to be transcoded. The weighted transcoding graph depicts the transcoding relationship among the transcodable versions. Caching candidate sets are selected using the dynamic programming algorithm. The MPR algorithm consists of two phases: in Phase I the proxy has sufficient space to cache the requested item, while in Phase II the proxy has insufficient space to cache the requested item.
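The interplay between the three components can be sketched as follows. This is purely illustrative; the class and method names (`params_for`, `handle`, and so on) are our assumptions, not interfaces defined by the paper.

```python
# Illustrative sketch of the three VTP components and their interaction.

class TPS:
    """Transcoding Preference Script: maps a device property to parameters."""
    def __init__(self, rules):
        self.rules = rules                 # e.g. {"pda": {"width": 320}}
    def params_for(self, device):
        return self.rules.get(device, {})

class TranscodingAgent:
    """Performs the transcoding action, driven by TPS-supplied parameters."""
    def transcode(self, obj, params):
        return f"{obj} transcoded with {sorted(params.items())}"

class ServiceAgent:
    """Reads the TPS and decides what to hand to the transcoding agent."""
    def __init__(self, tps, transcoder):
        self.tps, self.transcoder = tps, transcoder
    def handle(self, obj, device):
        return self.transcoder.transcode(obj, self.tps.params_for(device))
```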

Fig. 1 Overall Architecture


In this section, we describe the modified scheme DCC-MPR for maintaining the cache categories of a transcoding proxy dynamically. Section IV-A covers the mechanism of the weighted transcoding graph and the relevant procedures. Based on the profits of the versions of each object, the dynamic programming algorithm is proposed in Section IV-B.

In Section IV-C, the modified Maximum Profit Replacement Phase-I algorithm is proposed for a transcoding proxy to perform cache replacement. The database D is regarded as the collection that contains all possible objects and their versions. For each object i, the number of transcodable versions, i.e., the categories, is denoted by Ni. The original version of object i is denoted as oi,1, whereas the least detailed version, which cannot be transcoded any further, is denoted as oi,Ni.

Weighted Transcoding Graph Generation

The weighted transcoding graph is used to represent the transcoding relationship among the various transcodable versions of an object. The transcoding cost of a version is the weight associated with that version, and the transcoding relationships are updated as versions are added. The weighted transcoding graph Gi is a directed graph with weight function Wi; it depicts the transcoding relationship among the transcodable versions of object i. Each vertex v ∈ V[Gi] represents a transcodable version of the object. In order to maintain dynamic cache categories, three procedures, i.e., AddCate(Gi), RemoveCate(Gi), and GetSubgraph(Gi, V′), are defined.
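A minimal sketch of this graph bookkeeping is given below. The method names mirror the three procedures named in the text; everything else (the internal representation as a vertex set plus an edge-cost dictionary) is our assumption.

```python
# Sketch of mechanism DCC's weighted transcoding graph with the three
# maintenance procedures AddCate, RemoveCate, and GetSubgraph.

class WeightedTranscodingGraph:
    def __init__(self):
        self.versions = set()        # vertices: transcodable versions
        self.w = {}                  # w[(u, v)] = transcoding cost u -> v

    def add_cate(self, v, edges):
        """AddCate: register a new version and its transcoding costs."""
        self.versions.add(v)
        for (u, x), cost in edges.items():
            self.w[(u, x)] = cost

    def remove_cate(self, v):
        """RemoveCate: drop a version and every edge touching it."""
        self.versions.discard(v)
        self.w = {e: c for e, c in self.w.items() if v not in e}

    def get_subgraph(self, kept):
        """GetSubgraph: restrict the graph to the versions in `kept`."""
        g = WeightedTranscodingGraph()
        g.versions = set(kept) & self.versions
        g.w = {(u, x): c for (u, x), c in self.w.items()
               if u in g.versions and x in g.versions}
        return g
```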


Fig. 2 Illustrative example of mechanism DCC. (a) Weighted transcoding graph (b) Procedure AddCate(Gi) (c) Procedure RemoveCate(Gi) (d) Procedure GetSubgraph(Gi, V′)

Caching Candidate Set Selection

The caching candidate set contains the objects, or the versions of objects, that are profitable to cache. The caching candidate set is selected using the dynamic programming algorithm. The symbols used in this section are given in Table I. For a given weighted transcoding graph Gi, the profit functions, including the singular profit, aggregate profit, marginal profit, and generalized profit, are defined as follows:

Definition 1: PF(oi,j) is defined as the singular profit of caching oi,j while no other version of object i is cached:

PF(oi,j) = Σ(j,x)∈E[Gi] ri,x (di + ti(1, x) − ti(j, x)),

where E[Gi] represents the collection of all edges in graph Gi. Note that the reference rate ri,x reflects the popularity, whereas the term (di + ti(1, x) − ti(j, x)) is regarded as the delay saving.

Definition 2: PF(oi,j1, oi,j2, …, oi,jk) is defined as the aggregate profit of caching oi,j1, oi,j2, …, oi,jk at the same time, where G′i is the subgraph derived from the procedure GetSubgraph(Gi, {oi,j1, oi,j2, …, oi,jk}).

Definition 3: PF(oi,j | oi,j1, oi,j2, …, oi,jk) is defined as the marginal profit of caching oi,j, given that oi,j1, oi,j2, …, oi,jk are already cached, where j ∉ {j1, j2, …, jk}.

Definition 4: The generalized profit Gi,j of the object oi,j is defined as Gi,j = pi,j / si,j. In order to determine the items which should be cached in the transcoding proxy, we define the caching candidate set, denoted by DH. The caching candidate set DH, a subset of D, contains the items with high priority to be cached.

Definition 5: The total profit of DH, denoted by PH, is defined as the sum of the profits of all data items, including the original and transcoded ones, in DH. The total size of DH, denoted by SH, is defined as the sum of the object sizes of all data items in DH.
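Definitions 1 and 4 can be put into a small numeric sketch. We write `r[x]` for the reference rate ri,x, `d` for the delay di, and `t[(u, v)]` for the transcoding cost ti(u, v); these names, and the assumption that the sum ranges over the versions x reachable from j in the graph, are ours.

```python
# Hedged sketch of Definitions 1 and 4: the singular profit sums the
# delay saving, weighted by the reference rate, over the versions x that
# are transcodable from the cached version j; the generalized profit
# normalizes that profit by the space the version occupies.

def singular_profit(j, reachable, r, d, t):
    """Definition 1: profit of caching version j alone."""
    return sum(r[x] * (d + t[(1, x)] - t[(j, x)]) for x in reachable)

def generalized_profit(profit, size):
    """Definition 4: profit per unit of cache space."""
    return profit / size
```

For example, with d = 10, caching version 2 saves the fetch delay plus the cost gap between transcoding from the original and transcoding from version 2, for every reachable version.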

The candidate set selection algorithm takes the cache size constraint ZC and the database D as input. The procedure is based on the profits computed for the various versions of the objects. The caching candidate set DH is the output of this algorithm.

Table I


The dynamic programming algorithm used to select the caching candidate set is given below:
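One plausible reading of the selection step is a knapsack-style dynamic program: choose the set of versions whose total size SH fits the cache constraint ZC while the total profit PH is maximal. The sketch below follows that reading with independent per-item profits; the interdependence of profits via the transcoding graph (Definitions 2 and 3) is deliberately omitted for brevity.

```python
# Knapsack-style dynamic program over candidate versions: maximize total
# profit subject to the cache size constraint zc. `items` is a list of
# (name, size, profit) triples; profits are treated as independent here.

def select_candidates(items, zc):
    """Return (best_total_profit, chosen_names)."""
    best = {0: (0, [])}                      # used size -> (profit, names)
    for name, size, profit in items:
        # iterate over a snapshot, largest used size first (0/1 knapsack)
        for used, (p, names) in sorted(best.items(), reverse=True):
            nu = used + size
            if nu <= zc and (nu not in best or best[nu][0] < p + profit):
                best[nu] = (p + profit, names + [name])
    return max(best.values())
```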

Modified MPR Algorithm Phase-I

The modified Maximum Profit Replacement algorithm has two phases: Phase I deals with sufficient space in the cache, whereas Phase II deals with insufficient space. When a particular object is to be cached, the cache is first checked for sufficient space to accommodate the versions. The flow of MPR Phase I is shown in Fig. 2.

Fig. 2 Flowchart of modified MPR algorithm PhaseI

MPR Phase I involves various decisions based on the conditions that are met, namely cache hit and cache miss. A cache hit occurs when the object requested by the client is present in the cache; a cache miss occurs when the requested object is not available in the cache, so an object has to be put into the cache to serve the client request. The cache space is compared with the size of the requested object; if the remaining available cache space is sufficient, the requested object is cached.

When a cache miss occurs, direct transcoding from the objects and versions of objects already in the cache is attempted, and the transcoded versions are cached after checking the cache space. Otherwise, the required object or version is searched for in the candidate set; if it is available there, it is cached after checking the available remaining cache space. If the required object is not available in the candidate set, it is searched for in the database; if it is available there, it is likewise cached after checking the available remaining cache space.
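The hit/miss decision flow above can be condensed into a sketch. The paper gives this flow only as a flowchart; the interfaces here (pools mapping a version to its size, and a `can_transcode` predicate) are our assumptions.

```python
# Sketch of the MPR Phase-I decision flow: cache hit, then direct
# transcoding from a cached version, then the candidate set, then the
# database; an item found in a pool is cached only if space suffices.

def mpr_phase1(want, cache, candidate_set, database, free, can_transcode):
    """Serve `want` and report where it came from."""
    if want in cache:
        return "cache"                            # cache hit
    if any(can_transcode(v, want) for v in cache):
        return "transcoded-from-cache"            # direct transcoding
    for pool, label in ((candidate_set, "candidate-set"),
                        (database, "database")):
        if want in pool:
            if pool[want] <= free:                # enough remaining space?
                cache[want] = pool[want]
            return label
    return "miss"
```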

The cache now contains the required object or the required version of the object, which is then made available for the user to view on the client device.


The scheme MPR Phase I was evaluated with various video formats such as AVI (Audio Video Interleave), MPEG (Moving Picture Experts Group), and QuickTime. The experiment involved about 100 video files with extensions such as .avi, .mpg, and .mov. The response time of the system was measured for the various operations carried out after the user sends a request, and it was found to vary with the operation performed. The response times are tabulated in Table II.


Table II
Response Time for Retrieving the Video File

Size (MB) | Response time (ms) - Candidate set | Response time (ms)
The video files were also transcoded, and the response time for transcoding was measured. For instance, transcoding a video file from the QuickTime format to the AVI format took 6969 ms, as shown in Table III.

Table III
Time to Transcode the Video File

Time to transcode (ms) | Conversion type
It is clear from Tables II and III that the proposed scheme is efficient, since the response time of retrieving the file from the candidate set or the database is smaller than that of transcoding the file directly to the required format.


In this paper, the modified Maximum Profit Replacement with Dynamic Cache Categories (DCC-MPR) scheme is proposed. DCC-MPR provides dynamic cache categories and also performs replacement of the objects in the cache. The modified DCC-MPR performs cache replacement according to the contents of the caching candidate set. The caching candidate set is generated using the dynamic programming concept, which is based on the weighted transcoding graph. The weighted transcoding graph is generated and updated dynamically upon addition and deletion of versions. The modified DCC-MPR performs better in many respects than the DCC-MPR scheme in a conventional transcoding proxy system.
