{"id":1291,"date":"2022-03-12T15:04:44","date_gmt":"2022-03-12T14:04:44","guid":{"rendered":"https:\/\/websites.fraunhofer.de\/video-dev\/?p=1291"},"modified":"2022-03-12T15:04:45","modified_gmt":"2022-03-12T14:04:45","slug":"to-understand-is-to-perceive-patterns","status":"publish","type":"post","link":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/","title":{"rendered":"To understand is to perceive &#60;Patterns>"},"content":{"rendered":"\n<p>Most people in the streaming industry probably watch live streams a bit different from \u201cnormal\u201d people. Personally, I am always curious about the formats and settings the streaming providers use and if there is anything special to be found. When I was watching the UEFA Champions League game between Bayern Munich and Salzburg I took the chance to look into the Amazon Prime DASH manifest and found something interesting.<\/p>\n\n\n\n<p>For Chrome Desktop browsers, Amazon uses the DASH format with a video segment duration of exactly two seconds:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&lt;SegmentTemplate timescale=\"25000\"&gt;\n\n&lt;SegmentTimeline&gt;\n    &lt;S d=\"50000\" r=\"853\" t=\"110492081250\"\/&gt;\n&lt;\/SegmentTimeline&gt;<\/code><\/pre>\n\n\n\n<p>So far so good, nothing special here. The only interesting thing here is that they use a &nbsp;<code>&lt;SegmentTimeline&gt;<\/code><em> <\/em>signaling with an <code>@r<\/code> attribute.  In<a href=\"https:\/\/websites.fraunhofer.de\/video-dev\/why-and-how-to-align-media-segments-for-abr-streaming\/\" target=\"_blank\" rel=\"noreferrer noopener\"> Why and how to align media segments for ABR streaming<\/a> we discussed ways to align the duration of audio and video segments. One of the main advantages we gain from this alignment is the reduced DASH manifest size. 
We can easily use the <code>&lt;SegmentTemplate&gt;<\/code> mechanism or the <code>@r<\/code> attribute together with <code>&lt;SegmentTimeline&gt;<\/code> for <strong>both audio and video at the same time<\/strong>. However, there are only certain combinations of video frame rate, audio sampling rate, and segment duration for which audio and video segments are aligned. Two seconds is not one of them. Consequently, the audio segments in the Amazon stream must have some kind of oscillating duration pattern. And this is where we come to the interesting part.<\/p>\n\n\n\n<p>To avoid large manifest files caused by the repeating <code>&lt;SegmentTimeline&gt;<\/code> duration pattern, Amazon simply introduced a new <code>&lt;Pattern&gt;<\/code> element:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&lt;SegmentTemplate timescale=\"48000\"&gt;\n    &lt;SegmentTimeline&gt;\n        &lt;S d=\"96256\" r=\"2\" t=\"212144796192\"\/&gt;\n        <strong>&lt;Pattern r=\"211\" t=\"212145084960\"&gt;\n            &lt;S d=\"95232\"\/&gt;\n            &lt;S d=\"96256\" r=\"2\"\/&gt;\n        &lt;\/Pattern&gt;<\/strong>\n        &lt;S d=\"95232\" t=\"212226492960\"\/&gt;\n        &lt;S d=\"96256\" r=\"1\" t=\"212226588192\"\/&gt;\n    &lt;\/SegmentTimeline&gt;\n<\/code><\/pre>\n\n\n\n<p>The <code>&lt;Pattern&gt;<\/code> element has an <code>@r<\/code> counter similar to what we know from the <code>&lt;S&gt;<\/code> element. In addition, the <code>@t<\/code> attribute serves as a timing anchor. Inside the <code>&lt;Pattern&gt;<\/code> element we find the <code>&lt;S&gt;<\/code> element structure we know from \u201cstandard-compliant\u201d DASH streaming.<\/p>\n\n\n\n<p>I remember that we had a similar discussion about <code>&lt;Pattern&gt;<\/code> elements in DASH-IF some years ago based on a <a href=\"https:\/\/members.dashif.org\/wg\/DASH\/document\/218\" target=\"_blank\" rel=\"noreferrer noopener\">presentation<\/a> by AWS Elemental. 
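As a sanity check on the numbers above, here is a small sketch (plain Python, my own illustration, not Amazon's code; it assumes `<Pattern>`'s `@r` counts additional repetitions, just like on `<S>`) that expands the pattern into the explicit `<S>` list it replaces:

```python
# Expand the <Pattern r="211"> element above into explicit segment durations.
timescale = 48000   # audio timescale (48 kHz)
aac_frame = 1024    # samples per AAC frame

# Pattern body: <S d="95232"/> followed by <S d="96256" r="2"/>
pattern = [95232] + [96256] * 3
repeats = 211                    # @r: 211 additional repetitions -> 212 in total

durations = pattern * (repeats + 1)

# Every duration is a whole number of AAC frames (93 resp. 94 frames) ...
assert all(dur % aac_frame == 0 for dur in durations)
# ... and the oscillating pattern averages exactly two seconds per segment,
# so the audio timeline keeps pace with the 2 s video segments.
print(sum(pattern) / len(pattern) / timescale)  # 2.0
print(len(durations))                           # 848 <S> entries collapsed into 4

# The anchor times in the manifest agree: the gap between the Pattern's @t and
# the next explicit <S> is exactly 212 pattern iterations.
print((212226492960 - 212145084960) // sum(pattern))  # 212
```

So one `<Pattern>` element stands in for 848 segments whose durations could not be expressed with a single `@r` run.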
To the best of my knowledge, this never made it into the MPEG specification or the DASH-IF IOP guidelines, as it would break backwards compatibility. It is interesting to see that the industry saw a need for it and implemented it accordingly. Then again, Amazon most certainly has full control over the encoder, the packager, and most of the platforms their application runs on. Maybe the discussion will come up in MPEG or DASH-IF again, and we will see a similar <code>&lt;Pattern&gt;<\/code> approach in a future version of the specifications. For minor differences between the segment durations, a <code>&lt;SegmentTemplate&gt;<\/code> approach instead of <code>&lt;SegmentTimeline&gt;<\/code> with <code>&lt;Pattern&gt;<\/code> works as well: the DASH ISO specification allows the segment&#8217;s actual presentation time to deviate from the MPD-signaled start time by up to 50% of the segment duration.<\/p>\n\n\n\n<p>If you have any questions regarding DASH streaming or <a href=\"https:\/\/github.com\/Dash-Industry-Forum\/dash.js\" target=\"_blank\" rel=\"noreferrer noopener\">dash.js<\/a> in particular, feel free to check out our <a href=\"https:\/\/www.fokus.fraunhofer.de\/go\/dash\">website<\/a> and contact us.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Most people in the streaming industry probably watch live streams a bit differently from \u201cnormal\u201d people. Personally, I am always curious about the formats and settings the streaming providers use and whether there is anything special to be found. 
When&#8230;<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"coauthors":[6],"class_list":["post-1291","post","type-post","status-publish","format-standard","hentry","category-mpeg-dash"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>To understand is to perceive &#060;Patterns&gt; - Video-Dev<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"To understand is to perceive &#060;Patterns&gt; - Video-Dev\" \/>\n<meta property=\"og:description\" content=\"Most people in the streaming industry probably watch live streams a bit different from \u201cnormal\u201d people. Personally, I am always curious about the formats and settings the streaming providers use and if there is anything special to be found. 
When...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/\" \/>\n<meta property=\"og:site_name\" content=\"Video-Dev\" \/>\n<meta property=\"article:published_time\" content=\"2022-03-12T14:04:44+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-03-12T14:04:45+00:00\" \/>\n<meta name=\"author\" content=\"Daniel Silhavy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@dsilhavy\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Silhavy\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/\"},\"author\":{\"name\":\"Daniel Silhavy\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#\\\/schema\\\/person\\\/f7e1eee3cb4eae87a59648195014a809\"},\"headline\":\"To understand is to perceive 
&#60;Patterns>\",\"datePublished\":\"2022-03-12T14:04:44+00:00\",\"dateModified\":\"2022-03-12T14:04:45+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/\"},\"wordCount\":455,\"commentCount\":0,\"articleSection\":[\"MPEG-DASH\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/\",\"name\":\"To understand is to perceive &#60;Patterns> - Video-Dev\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#website\"},\"datePublished\":\"2022-03-12T14:04:44+00:00\",\"dateModified\":\"2022-03-12T14:04:45+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#\\\/schema\\\/person\\\/f7e1eee3cb4eae87a59648195014a809\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/to-understand-is-to-perceive-patterns\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"To understand is to perceive 
&#60;Patterns>\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#website\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/\",\"name\":\"Video-Dev\",\"description\":\"Future Applications and Media - Video Development Blog\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#\\\/schema\\\/person\\\/f7e1eee3cb4eae87a59648195014a809\",\"name\":\"Daniel Silhavy\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2019\\\/11\\\/0-1-150x150.jpegccb13c1b60303228bf3c575f3345fe29\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2019\\\/11\\\/0-1-150x150.jpeg\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2019\\\/11\\\/0-1-150x150.jpeg\",\"caption\":\"Daniel Silhavy\"},\"description\":\"Daniel Silhavy studied Computer Science at the Technical University of Berlin (TUB). He received his Masters degree with the completion of his thesis \u201cAd Insertion in MPEG-DASH\u201d at Fraunhofer Institute for Open Communication Systems (FOKUS) in 2015. Currently, he is employed as a scientific assistant and project manager at the Business Unit Future Applications and Media (FAME). 
He specializes in the R&amp;D of topics dealing with IPTV and adaptive media streaming.\",\"sameAs\":[\"https:\\\/\\\/www.fokus.fraunhofer.de\\\/fame\\\/team\\\/silhavy\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/daniel-silhavy-21650a129\\\/\",\"https:\\\/\\\/x.com\\\/dsilhavy\"],\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/author\\\/silhavy\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"To understand is to perceive &#60;Patterns> - Video-Dev","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/","og_locale":"en_US","og_type":"article","og_title":"To understand is to perceive &#60;Patterns> - Video-Dev","og_description":"Most people in the streaming industry probably watch live streams a bit different from \u201cnormal\u201d people. Personally, I am always curious about the formats and settings the streaming providers use and if there is anything special to be found. When...","og_url":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/","og_site_name":"Video-Dev","article_published_time":"2022-03-12T14:04:44+00:00","article_modified_time":"2022-03-12T14:04:45+00:00","author":"Daniel Silhavy","twitter_card":"summary_large_image","twitter_creator":"@dsilhavy","twitter_misc":{"Written by":"Daniel Silhavy","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/#article","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/"},"author":{"name":"Daniel Silhavy","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#\/schema\/person\/f7e1eee3cb4eae87a59648195014a809"},"headline":"To understand is to perceive &#60;Patterns>","datePublished":"2022-03-12T14:04:44+00:00","dateModified":"2022-03-12T14:04:45+00:00","mainEntityOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/"},"wordCount":455,"commentCount":0,"articleSection":["MPEG-DASH"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/","url":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/","name":"To understand is to perceive &#60;Patterns> - 
Video-Dev","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#website"},"datePublished":"2022-03-12T14:04:44+00:00","dateModified":"2022-03-12T14:04:45+00:00","author":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#\/schema\/person\/f7e1eee3cb4eae87a59648195014a809"},"breadcrumb":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/to-understand-is-to-perceive-patterns\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/websites.fraunhofer.de\/video-dev\/"},{"@type":"ListItem","position":2,"name":"To understand is to perceive &#60;Patterns>"}]},{"@type":"WebSite","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#website","url":"https:\/\/websites.fraunhofer.de\/video-dev\/","name":"Video-Dev","description":"Future Applications and Media - Video Development Blog","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/websites.fraunhofer.de\/video-dev\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#\/schema\/person\/f7e1eee3cb4eae87a59648195014a809","name":"Daniel Silhavy","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2019\/11\/0-1-150x150.jpegccb13c1b60303228bf3c575f3345fe29","url":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2019\/11\/0-1-150x150.jpeg","contentUrl":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2019\/11\/0-1-150x150.jpeg","caption":"Daniel 
Silhavy"},"description":"Daniel Silhavy studied Computer Science at the Technical University of Berlin (TUB). He received his Masters degree with the completion of his thesis \u201cAd Insertion in MPEG-DASH\u201d at Fraunhofer Institute for Open Communication Systems (FOKUS) in 2015. Currently, he is employed as a scientific assistant and project manager at the Business Unit Future Applications and Media (FAME). He specializes in the R&amp;D of topics dealing with IPTV and adaptive media streaming.","sameAs":["https:\/\/www.fokus.fraunhofer.de\/fame\/team\/silhavy","https:\/\/www.linkedin.com\/in\/daniel-silhavy-21650a129\/","https:\/\/x.com\/dsilhavy"],"url":"https:\/\/websites.fraunhofer.de\/video-dev\/author\/silhavy\/"}]}},"_links":{"self":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/1291","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/comments?post=1291"}],"version-history":[{"count":15,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/1291\/revisions"}],"predecessor-version":[{"id":1307,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/1291\/revisions\/1307"}],"wp:attachment":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/media?parent=1291"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/categories?post=1291"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/tags?post=1291"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/web
sites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/coauthors?post=1291"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}