{"id":844,"date":"2021-01-26T13:46:50","date_gmt":"2021-01-26T12:46:50","guid":{"rendered":"https:\/\/websites.fraunhofer.de\/video-dev\/?p=844"},"modified":"2021-01-26T13:46:51","modified_gmt":"2021-01-26T12:46:51","slug":"mse-a-story-of-append-windows-presentation-timestamps-and-video-buffers","status":"publish","type":"post","link":"https:\/\/websites.fraunhofer.de\/video-dev\/mse-a-story-of-append-windows-presentation-timestamps-and-video-buffers\/","title":{"rendered":"MSE: A story of append windows, presentation timestamps and video buffers"},"content":{"rendered":"\n<p>To some degree, we can compare a DASH MPD to a novel. A novel usually has multiple chapters (DASH periods) with exciting parts (high quality segments) and rather boring parts (low quality segments). Sometimes, we even skip some chapters (seek). Or, if you are like me and it is late (high presentation time), you lose concentration (your buffer is full) and fall asleep while reading (the end of the append window is reached). 
I admit, this was not the best analogy, but I could not think of a better transition to the main topic of this post: <strong>append windows, presentation timestamps, IDR frames and their impact on the media buffer.<\/strong><\/p>\n\n\n\n<p>In this blog post, we will examine how the position of Instant Decoder Refresh (IDR) frames and the earliest presentation time (EPT) of a video segment can interact with <a href=\"https:\/\/www.w3.org\/TR\/media-source\/\" target=\"_blank\" rel=\"noreferrer noopener\">Media Source Extensions<\/a> (MSE) attributes <a href=\"https:\/\/www.w3.org\/TR\/media-source\/#dom-sourcebuffer-appendwindowstart\" target=\"_blank\" rel=\"noreferrer noopener\">appendWindowStart<\/a>, <a href=\"https:\/\/www.w3.org\/TR\/media-source\/#dom-sourcebuffer-appendwindowend\" target=\"_blank\" rel=\"noreferrer noopener\">appendWindowEnd<\/a> and <a href=\"https:\/\/www.w3.org\/TR\/media-source\/#dom-sourcebuffer-timestampoffset\" target=\"_blank\" rel=\"noreferrer noopener\">timestampOffset<\/a>. Some of the fundamentals of this blog post are covered in more detail in our previous blog post <a href=\"https:\/\/websites.fraunhofer.de\/video-dev\/common-pitfalls-in-mpeg-dash-streaming\/\" target=\"_blank\" rel=\"noreferrer noopener\">&#8220;Common pitfalls in MPEG-DASH streaming&#8221;<\/a>. For now, we assume that the reader is familiar with the concept of adaptive streaming and the basic structure of a DASH Media Presentation Description (MPD).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The append window<\/h2>\n\n\n\n<p>MSE-based DASH clients like dash.js and the Shaka Player use the appendWindowStart and appendWindowEnd attributes of the SourceBuffer object to define an append range. Coded frames with presentation timestamps within this append range are allowed to be appended to the SourceBuffer, while coded frames outside this range are filtered out.<\/p>\n\n\n\n<p>A DASH client typically aligns the MPD period boundaries with the append window. 
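This alignment can be sketched in code. The following helper is hypothetical (the function and parameter names are our own); the period start, period duration and presentation time offset would be derived from the MPD:

```javascript
// Hypothetical sketch: configure a SourceBuffer so that its append window
// matches a DASH period. periodStart, periodDuration and
// presentationTimeOffset are all in seconds and would come from the MPD.
function configureAppendWindow(sourceBuffer, periodStart, periodDuration, presentationTimeOffset) {
  // Coded frames with presentation timestamps outside
  // [appendWindowStart, appendWindowEnd) are filtered out by the MSE.
  sourceBuffer.appendWindowStart = periodStart;
  sourceBuffer.appendWindowEnd = periodStart + periodDuration;
  // Shift segment timestamps so that the first frame of the period lands
  // exactly at periodStart on the media timeline.
  sourceBuffer.timestampOffset = periodStart - presentationTimeOffset;
}
```

With a period start of 0 and a presentation time offset equal to the segment EPT, this yields the negative timestampOffset used later in our tests.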
The start of the period matches appendWindowStart, while the end of the period matches appendWindowEnd. That way, all media segments that are appended to the buffer are guaranteed to be included in a period. Frames that overlap the period boundaries are filtered out.<\/p>\n\n\n\n<p>In the basic example depicted below, the presentation timestamps of the first media segment lie outside the append window. Consequently, the segment is filtered out.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"458\" height=\"121\" src=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2021\/01\/appendWindowExample-1.png\" alt=\"\" class=\"wp-image-848\" srcset=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2021\/01\/appendWindowExample-1.png 458w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2021\/01\/appendWindowExample-1-400x106.png 400w\" sizes=\"auto, (max-width: 458px) 100vw, 458px\" \/><\/figure>\n\n\n\n<p>Now, we can ask ourselves what happens if we only <em>partially<\/em> exclude a segment. 
For instance, let us see what happens if we want to exclude only the first part of the first segment:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2021\/01\/Copy-of-appendWindowExample.png\" alt=\"\" class=\"wp-image-849\" width=\"428\" height=\"121\" srcset=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2021\/01\/Copy-of-appendWindowExample.png 428w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2021\/01\/Copy-of-appendWindowExample-400x113.png 400w\" sizes=\"auto, (max-width: 428px) 100vw, 428px\" \/><\/figure>\n\n\n\n<p>The MSE specification provides a specific note for such use cases:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>Some implementations may choose to collect a few of these coded frames with presentation timestamp less than appendWindowStart and use them to generate <strong>a splice at the first coded frame that has a presentation timestamp greater than or equal to appendWindowStart even if that frame is not a random access point<\/strong>. Supporting this requires multiple decoders or faster than real-time decoding, so for now, this behavior will not be a normative requirement.<\/p><\/blockquote>\n\n\n\n<p>The significant part is marked in bold. A video segment usually has an IDR frame right at the start. While a single segment might in some cases contain multiple IDR frames, it is more common to include only a single IDR frame per segment. 
So what actually happens if we try this out?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The test setup<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Test content<\/h3>\n\n\n\n<p>First, we generate some test content:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td>Name<\/td><td>Segment length<\/td><td>EPT of first segment<\/td><td>GoP size (frames)<\/td><td>FPS<\/td><\/tr><tr><td>2_sec_gop<\/td><td>6 sec<\/td><td>3,000\/90,000<\/td><td>60<\/td><td>30<\/td><\/tr><tr><td>6_sec_gop<\/td><td>6 sec<\/td><td>3,000\/90,000<\/td><td>180<\/td><td>30<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Our test consists of two different configurations for the video segments. Both segment types have a length of six seconds and a framerate of 30 frames per second. We use closed GoPs for both segments. The main difference is the GoP size. The &#8220;2_sec_gop&#8221; segment has an IDR frame every two seconds, which results in three IDR frames per segment. The &#8220;6_sec_gop&#8221; segment, on the other hand, only has a single IDR frame at the start.<\/p>\n\n\n\n<p>Interestingly enough, the encoder\/packager combination we used in this test sets the EPT value of the first segment to 3,000\/90,000. For some reason, this was not signaled in the corresponding MPD (no @presentationTimeOffset). In order to account for that, we can set the SourceBuffer@timestampOffset attribute to -3,000\/90,000. That way, we offset the EPT value in the segments and our buffer starts at 0.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Test application<\/h3>\n\n\n\n<p>In order to check the buffer, we use the buffered attribute of the video element:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>video.buffered.start(0) + '-' + video.buffered.end(0);<\/code><\/pre>\n\n\n\n<p>The MSE implementation is straightforward and not included in this post. 
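For completeness, a rough sketch of what such a harness might look like is given below. The MIME type, URLs and helper names are assumptions for illustration, not taken from the actual test application:

```javascript
// Helper to print the first buffered range, mirroring the snippet above.
function formatBuffered(buffered) {
  return buffered.length > 0
    ? buffered.start(0) + ' - ' + buffered.end(0)
    : 'empty';
}

// Hypothetical test harness (browser-only): open a MediaSource, create a
// video SourceBuffer and append an init segment plus one media segment.
async function runTest(video, initUrl, segmentUrl) {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise((resolve) =>
    mediaSource.addEventListener('sourceopen', resolve, { once: true }));

  // The codec string is an assumption; it must match the test content.
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');

  for (const url of [initUrl, segmentUrl]) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // appendBuffer is asynchronous; wait for the append to finish.
    await new Promise((resolve) =>
      sourceBuffer.addEventListener('updateend', resolve, { once: true }));
  }

  return formatBuffered(video.buffered);
}
```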
Important changes to MSE attributes are explained in the respective test cases.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Test platform<\/h3>\n\n\n\n<p>The following two tests were conducted on a MacBook Pro running macOS Catalina 10.15.7 with the following browser versions:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Google Chrome Version 87.0.4280.141<\/li><li>Mozilla Firefox 84.0.2 (64-Bit)<\/li><li>Safari Version 14.0<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Test Case 1: No append window<\/h2>\n\n\n\n<p>First, we test the behavior without changing the append window. Consequently, the only attribute we need to adjust is the timestampOffset to account for the non-zero EPT value:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>self.sourceBuffer.timestampOffset = -0.03333333333;<\/code><\/pre>\n\n\n\n<p>We don&#8217;t observe any surprises here. As expected, all browsers buffer the segment correctly and we are able to play the first six seconds of our content:<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td>Name<\/td><td>Chrome buffer<\/td><td>Firefox buffer<\/td><td>Safari buffer<\/td><\/tr><tr><td>2_sec_gop<\/td><td>0 &#8211; 6<\/td><td>0 &#8211; 6<\/td><td>0 &#8211; 6<\/td><\/tr><tr><td>6_sec_gop<\/td><td>0 &#8211; 6<\/td><td>0 &#8211; 6<\/td><td>0 &#8211; 6<\/td><\/tr><\/tbody><\/table><\/figure>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Test Case 2: Adjusted append window<\/h2>\n\n\n\n<p>Now it gets interesting: we&#8217;ve reached the climax of our story ;). 
We adjust the append window to cut off the first second of our video segments. This means that we are excluding the first IDR frame of both segments from the buffer:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>self.sourceBuffer.appendWindowStart = 1;\nself.sourceBuffer.appendWindowEnd = 6;\nself.sourceBuffer.timestampOffset = -0.03333333333;<\/code><\/pre>\n\n\n\n<p>This time, our results look different:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td>Name<\/td><td>Chrome buffer<\/td><td>Firefox buffer<\/td><td>Safari buffer<\/td><\/tr><tr><td>2_sec_gop<\/td><td>2 &#8211; 6<\/td><td>2 &#8211; 6<\/td><td>empty<\/td><\/tr><tr><td>6_sec_gop<\/td><td>empty<\/td><td>empty<\/td><td>empty<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Chrome and Firefox behave in a similar fashion. Both browsers discard all video frames up to the next IDR frame. For the 2_sec_gop content, we end up with a buffer from 2 &#8211; 6 seconds. For the 6_sec_gop content, the buffer remains empty: all media samples are discarded. Safari is even stricter: the buffer remains empty for both segments.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>What can we conclude from our tests?<\/p>\n\n\n\n<p>First of all, not all browsers behave the same way. While Chrome and Firefox were able to recover from the loss of the first IDR frame, Safari did not keep any data in the buffer.<\/p>\n\n\n\n<p>As a consequence, content authors should pay close attention when creating their media segments and authoring their MPDs. A negative eptDelta (EPT &#8211; MPD@presentationTimeOffset) can lead to scenarios in which a media segment is partially shifted out of the start of a period. In the worst case, this leads to a large gap in the video buffer. A DASH player needs to manually jump over this gap, as automatic gap handling is not supported by the MSE. If the player does not implement gap handling, playback will not start. 
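Such a manual gap jump can be sketched as follows; the helper name and the maxGap tolerance are assumptions, not values mandated by MSE or DASH:

```javascript
// Hypothetical sketch: find a seek target that jumps a small gap in front of
// the playhead. buffered is a TimeRanges-like object; returns null if no
// buffered range starts within maxGap seconds of currentTime.
function findGapJumpTarget(buffered, currentTime, maxGap = 0.5) {
  for (let i = 0; i < buffered.length; i++) {
    const start = buffered.start(i);
    // A range starting shortly after the playhead indicates a gap to skip.
    if (start > currentTime && start - currentTime <= maxGap) {
      return start + 0.01; // land just inside the next range
    }
  }
  return null;
}
```

A player would call this from a stall or waiting handler and, if a target is returned, assign it to video.currentTime.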
For more information on gap handling, check out our <a href=\"https:\/\/websites.fraunhofer.de\/video-dev\/common-pitfalls-in-mpeg-dash-streaming\/\" target=\"_blank\" rel=\"noreferrer noopener\">previous blog post<\/a>.<\/p>\n\n\n\n<p>If you have any additional questions regarding our DASH activities or dash.js in particular, feel free to check out our <a onclick=\"wiredminds.trackEvent('TRACK-LINK: Link from:mse-story to:go-dash')\" href=\"https:\/\/www.fokus.fraunhofer.de\/go\/dash\">website.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>To some degree, we can compare a DASH MPD to a novel. A novel usually has multiple chapters (DASH periods) with exciting parts (high quality segments) and rather boring parts (low quality segments). Sometimes, we even skip some chapters (seek)&#8230;.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,4,2],"tags":[],"coauthors":[6],"class_list":["post-844","post","type-post","status-publish","format-standard","hentry","category-dash-js","category-media-source-extensions","category-mpeg-dash"],"_links":{"self":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/844","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/comments?post=844"}],"version-history":[{"count":35,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/844\/revisions"}],"predecessor-version":[{"id":902,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/844\/revisions\/902"}],"wp:attachment":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/media?parent=844"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/categories?post=844"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/tags?post=844"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/coauthors?post=844"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}