{"id":794,"date":"2020-12-17T08:57:20","date_gmt":"2020-12-17T07:57:20","guid":{"rendered":"https:\/\/websites.fraunhofer.de\/video-dev\/?p=794"},"modified":"2020-12-17T08:57:22","modified_gmt":"2020-12-17T07:57:22","slug":"deep-encode-part-ii","status":"publish","type":"post","link":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/","title":{"rendered":"Deep Encode: Part II"},"content":{"rendered":"\n<p>Welcome back to the Deep Encode series!<\/p>\n\n\n\n<p>In the previous <a href=\"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-i\/\">post<\/a>, we introduced the entire FAMIUM Deep Encode project and touched upon different aspects and components of the architecture, including a sneak peek into per-scene encoding. In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.<\/p>\n\n\n\n<p>A large part of this complexity analysis development involved heavy research into the field of computer vision. While there are several metadata attributes that influence the overall video encoding quality (such as bitrates, resolution size, etc.), we wanted to take a few steps further and look into actual video feature algorithms that can enhance the machine learning models. 
As stated in the previous post, values resulting from the complexity analysis are sent to a machine learning model in order to predict an optimal encoding ladder.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">So, what&#8217;s been happening?<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"698\" height=\"211\" src=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-698x211.png\" alt=\"\" class=\"wp-image-805\" srcset=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-698x211.png 698w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-400x121.png 400w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-768x232.png 768w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-1536x464.png 1536w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1.png 1776w\" sizes=\"auto, (max-width: 698px) 100vw, 698px\" \/><\/figure>\n\n\n\n<p>The complexity analysis is made up of several components:&nbsp;<br><br><strong>General metadata extraction: <\/strong><br>This worker extracts overall source video data, such as resolution, bitrate, codec ID, Group of Pictures (GoP) size, etc.<br><br><strong>Characteristics:<\/strong> <br>This worker extracts frames from the source video and produces histogram representations of the frames, calculating their pixel intensity, image entropy, and energy. 
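<\/p>\n\n\n\n<p>As a concrete illustration, the per-frame computation might look like the sketch below. This is a minimal NumPy-based approximation, not the actual FAMIUM worker; the function name and the exact formulas (256-bin histograms, Shannon entropy, histogram energy) are assumptions on our part.<\/p>\n\n\n\n

```python
# Sketch of a Characteristics-style measurement for one decoded RGB frame,
# given as a NumPy array of shape (H, W, 3) with 8-bit values.
import numpy as np

def frame_characteristics(frame: np.ndarray) -> dict:
    """Per-channel histograms, mean pixel intensity, entropy and energy."""
    # One 256-bin histogram per color channel (red, green, blue).
    histograms = {
        channel: np.histogram(frame[..., i], bins=256, range=(0, 256))[0]
        for i, channel in enumerate(("red", "green", "blue"))
    }
    # Normalized gray-level distribution over all pixels.
    counts = np.histogram(frame, bins=256, range=(0, 256))[0]
    p = counts / counts.sum()
    nonzero = p[p > 0]
    entropy = float(-np.sum(nonzero * np.log2(nonzero)))  # Shannon entropy, bits
    energy = float(np.sum(p ** 2))  # uniformity of the distribution
    intensity = float(frame.mean())  # overall 'darkness' proxy
    return {"histograms": histograms, "intensity": intensity,
            "entropy": entropy, "energy": energy}
```

<p>With these definitions, a completely flat single-color frame collapses to one histogram bin, giving entropy 0 and energy 1, while busy, textured frames push entropy up and energy down.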
<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/orig_hist_kmeans-1.png\" alt=\"\" class=\"wp-image-808\" width=\"481\" height=\"318\" srcset=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/orig_hist_kmeans-1.png 413w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/orig_hist_kmeans-1-400x264.png 400w\" sizes=\"auto, (max-width: 481px) 100vw, 481px\" \/><figcaption>Frame 0: color spread histogram<\/figcaption><\/figure>\n\n\n\n<p>For this worker, frames are extracted from the source video, and histograms are constructed, separating each of the color channels (red, green, blue). Additionally, each histogram&#8217;s level of &#8216;darkness&#8217; and its standard deviation are calculated.<br><br><strong>Spatial and temporal algorithms: <\/strong><br>Like the Characteristics worker, the Spatial\/Temporal information worker also extracts frames from the source video. Once frames are extracted, they are analyzed by spatial and temporal algorithms. Spatial information describes the amount of spatial detail in a frame, such as edges and textures. The higher the spatial value, the more bitrate is needed. Temporal information, on the other hand, describes the motion between frames, measured as the difference between co-located pixel values. A higher temporal value means that there is more motion between adjacent frames. <br><br><strong>Scene detection: <\/strong><br>Thus far, we\u2019ve been working with two scene detection tools: FFMPEG and PySceneDetect, both of which have several advantages and disadvantages when it comes to detecting and extracting scenes. FFMPEG is an open source tool for all things media encoding-related (recording, converting, streaming, etc.). This tool detects scene changes based on a predetermined threshold between [0, 1]. 
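<\/p>\n\n\n\n<p>As a hedged sketch, the FFMPEG-based detection can be driven from Python roughly as follows. Only the FFMPEG filter syntax (<code>select='gt(scene,T)',showinfo<\/code>) is the tool&#8217;s own; the helper names are illustrative, and the <code>subprocess<\/code> wrapper is one possible integration, not necessarily how our worker invokes it.<\/p>\n\n\n\n

```python
# Detect scene changes with FFMPEG: the `select` filter keeps frames whose
# scene-change score exceeds a threshold in [0, 1], and `showinfo` logs
# their timestamps to stderr.
import re
import subprocess

def scene_detect_cmd(video_path: str, threshold: float = 0.4) -> list:
    """ffmpeg command printing one showinfo line per detected scene change."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"select='gt(scene,{threshold})',showinfo",
        "-f", "null", "-",
    ]

def parse_scene_timestamps(ffmpeg_stderr: str) -> list:
    """Extract pts_time values (seconds) from showinfo log lines."""
    return [float(t) for t in re.findall(r"pts_time:([0-9]+\.?[0-9]*)", ffmpeg_stderr)]

def detect_scenes(video_path: str, threshold: float = 0.4) -> list:
    result = subprocess.run(scene_detect_cmd(video_path, threshold),
                            capture_output=True, text=True)
    return parse_scene_timestamps(result.stderr)
```

<p>Each matched <code>pts_time<\/code> value is a detected scene-change timestamp in seconds.<\/p>\n\n\n\n<p>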
PySceneDetect is a Python library that uses FFMPEG\/FFPROBE as a basis for scene detection, and detects scene changes based on a predetermined threshold between [0, 100]. As a note, for the per-title encoding workflow, the scene detection worker is used as part of the complexity analysis, whereas it is removed for the per-<em>scene<\/em> encoding workflow. <\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"698\" height=\"180\" src=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/Screen-Shot-2020-10-29-at-5.58.08-PM-698x180.png\" alt=\"\" class=\"wp-image-807\" srcset=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/Screen-Shot-2020-10-29-at-5.58.08-PM-698x180.png 698w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/Screen-Shot-2020-10-29-at-5.58.08-PM-400x103.png 400w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/Screen-Shot-2020-10-29-at-5.58.08-PM-768x198.png 768w, https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/Screen-Shot-2020-10-29-at-5.58.08-PM.png 1070w\" sizes=\"auto, (max-width: 698px) 100vw, 698px\" \/><figcaption>Detected timestamps<\/figcaption><\/figure>\n\n\n\n<p><br><strong>Classification<\/strong>:<br>This worker uses MobileNet, a classification model based on a convolutional neural network architecture (yes, we have models on top of models). It assigns one or more categorical labels to the source video.&nbsp;For example, if Big Buck Bunny were being categorized, it would most likely be labeled with \u2018animation\u2019, \u2018bunny\u2019, etc. <br><br>A majority of the components were initially developed as Python APIs and then integrated as web workers &#8211; these workers are then \u2018triggered\u2019 by an incoming source video. <br><br><strong>What happens with these workers? 
<\/strong>These workers produce \u2018virtual encodes\u2019 &#8211; our substitute for the actual test encodes that are produced with the conventional per-title encoding method. The virtual encodes consist of values derived from the complexity analysis, which are then sent to a selected machine learning model: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n   \"clip\":{\n      \"s_video_id\":\"\/home\/fame\/pte\/storage\/content\/upload\/1m_big_buck_bunny.mp4\",\n      \"s_width\":640,\n      \"s_height\":360,\n      \"s_storage_size\":5510872,\n      \"s_duration\":60,\n      \"s_scan_type\":\"progressive\",\n      \"c_content_category\":\"coral reef\",\n      \"c_si\":63.149,\n      \"c_ti\":6.916,\n      \"c_scene_change_ffmpeg_ratio30\":6,\n      \"c_scene_change_ffmpeg_ratio60\":3,\n      \"c_scene_change_ffmpeg_ratio90\":1,\n...\n   \"encodes\":&#91;\n   {\n      \"e_aspect_ratio\":\"2:1\",\n      \"e_pixel_aspect_ratio\":\"1:1\",\n      \"e_codec\":\"h264\",\n      \"e_codec_profile\":\"main\",\n      \"e_codec_level\":31,\n      \"e_framerate\":\"25\",\n      \"e_ref_frame_count\":1,\n      \"e_scan_type\":\"progressive\",\n      \"e_bit_depth\":8,\n      \"e_pixel_fmt\":\"yuv420p\",\n      \"e_b_frame_int\":1,\n      \"e_gop_size\":60,\n      \"width\":3840,\n      \"height\":2160,\n      \"e_width\":3840,\n      \"e_height\":2160,\n      \"e_average_bitrate\":250\n   }\n...\n]\n}<\/code><\/pre>\n\n\n\n<p>Mind you, the effort behind this research and development did not happen overnight. We conducted extensive research and development into the field of Computer Vision \u2013 looking into different video feature detection algorithms and determining how strongly each algorithm correlates with video quality.<\/p>\n\n\n\n<p><strong>Pitfalls:<\/strong><br>The JSON code snippet above shows only a portion of the data that is sent to the selected machine learning model. As you can see, it is A LOT. 
A large part of the workflow (in terms of timing) consists of the complexity analysis component \u2013 depending on the video file size, the workers may take a long time to compute the complexity. Several adjustments were made to the workers in order to speed up the entire workflow (from source video input to downloading production encodes). <\/p>\n\n\n\n<p>The Characteristics worker, as mentioned, extracts frames from a video, then performs its relevant calculations. However, extracting at 1 fps is not efficient for a video that lasts over 10 minutes. Therefore, this configuration was adjusted to extract frames at 1 frame per <em>60<\/em> seconds, which allowed the worker to compute video characteristics at a faster rate.<\/p>\n\n\n\n<p>The Scene Detection worker (PySceneDetect) also tends to perform slower (in terms of the overall complexity analysis timing) with larger video files and threshold values that detect more scenes. To overcome the latter, we&#8217;ve set a &#8216;default threshold&#8217; for this worker, so that fewer, but more significant, scenes are detected.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Wait, there&#8217;s more&#8230;<\/h2>\n\n\n\n<p>In addition to the complexity analysis, we\u2019ve developed additional APIs and workers to enhance and automate the existing architecture. Here are a few highlights:<br><br><strong>Encoding ladder:<\/strong> <br>In order to determine the optimal encoding ladder out of 30+ bitrate\/VMAF predictions, we&#8217;ve developed an API that interpolates between the results and produces ideal bitrate\/VMAF pairs per resolution.<br><br><strong>Live streaming: <\/strong><br>To support live streaming input, separate components were developed so that fixed-length scenes are extracted from the streaming URL prior to the complexity analysis. <br><br><strong>Data Export:<\/strong><br>This functionality utilizes all videos in our repository to derive data from the virtual encodes. 
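<\/p>\n\n\n\n<p>Conceptually, the export joins the clip-level features with each encode entry and writes one row per rendition. A minimal sketch of that flattening step (the function name is hypothetical; field names follow the JSON snippet shown earlier):<\/p>\n\n\n\n

```python
# Flatten one virtual encode (shared clip features + per-encode entries)
# into CSV text with one row per rendition.
import csv
import io

def virtual_encode_to_csv(virtual_encode: dict) -> str:
    """One CSV row per encode, each prefixed with the shared clip features."""
    clip = virtual_encode["clip"]
    encodes = virtual_encode["encodes"]
    fieldnames = list(clip) + list(encodes[0])
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for encode in encodes:
        # Clip-level features repeat on every row; encode fields vary.
        writer.writerow({**clip, **encode})
    return buf.getvalue()
```

<p>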
A CSV is produced as a result, and used as training data for the machine learning models.<br><br><strong>Production Encoding:<\/strong><br>This feature produces the actual encodes that result from the encoding ladder. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What&#8217;s next?<\/h2>\n\n\n\n<p>As mentioned in the first article, we are currently looking into per-scene encoding. Currently in development is an additional worker that detects and records scene timestamps in the database, and extracts scenes to be sent to the complexity analysis components. <\/p>\n\n\n\n<p>Also in development are two workers for the complexity analysis: color quantization and principal component analysis. These algorithms have been tweaked to calculate significant values that correlate with video complexity and quality.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>In this blog post, we dove deep into the workers and video complexity analysis of FAMIUM Deep Encode, as well as additional APIs that enhance our overall architecture. Stay tuned for Deep Encode, Part III!<\/p>\n\n\n\n<p>If you have any questions regarding our Deep Encode activities, please check out our <a href=\"https:\/\/www.fokus.fraunhofer.de\/en\/fame\/deep-encode\">website<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome back to the Deep Encode series! In the previous post, we introduced the entire FAMIUM Deep Encode project and touched upon different aspects and components of the architecture, including a sneak peek into per-scene encoding. 
In this post, we&#8230;<\/p>\n","protected":false},"author":11,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13,33,32,35,14],"tags":[37,36,39],"coauthors":[24],"class_list":["post-794","post","type-post","status-publish","format-standard","hentry","category-encoding","category-per-scene-encoding","category-per-title-encoding","category-video-encoding","category-video-quality-metrics","tag-per-scene-encoding","tag-per-title-encoding","tag-video-encoding"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Deep Encode: Part II - Video-Dev<\/title>\n<meta name=\"description\" content=\"Welcome back to the Deep Encode series! In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Encode: Part II - Video-Dev\" \/>\n<meta property=\"og:description\" content=\"Welcome back to the Deep Encode series! 
In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/\" \/>\n<meta property=\"og:site_name\" content=\"Video-Dev\" \/>\n<meta property=\"article:published_time\" content=\"2020-12-17T07:57:20+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-12-17T07:57:22+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-698x211.png\" \/>\n<meta name=\"author\" content=\"Anita Chen\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Anita Chen\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/\"},\"author\":{\"name\":\"Anita Chen\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#\\\/schema\\\/person\\\/66a433660483086bca16d8f94bc0e5c1\"},\"headline\":\"Deep Encode: Part 
II\",\"datePublished\":\"2020-12-17T07:57:20+00:00\",\"dateModified\":\"2020-12-17T07:57:22+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/\"},\"wordCount\":1095,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2020\\\/10\\\/complexity-1-698x211.png\",\"keywords\":[\"per-scene encoding\",\"per-title encoding\",\"video encoding\"],\"articleSection\":[\"Encoding\",\"Per-Scene Encoding\",\"Per-Title Encoding\",\"video encoding\",\"Video Quality Metrics\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/\",\"name\":\"Deep Encode: Part II - Video-Dev\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2020\\\/10\\\/complexity-1-698x211.png\",\"datePublished\":\"2020-12-17T07:57:20+00:00\",\"dateModified\":\"2020-12-17T07:57:22+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#\\\/schema\\\/person\\\/66a433660483086bca16d8f94bc0e5c1\"},\"description\":\"Welcome back to the Deep Encode series! 
In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#primaryimage\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2020\\\/10\\\/complexity-1.png\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/wp-content\\\/uploads\\\/2020\\\/10\\\/complexity-1.png\",\"width\":1776,\"height\":536},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/deep-encode-part-ii\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Deep Encode: Part II\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#website\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/\",\"name\":\"Video-Dev\",\"description\":\"Future Applications and Media - Video Development 
Blog\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/#\\\/schema\\\/person\\\/66a433660483086bca16d8f94bc0e5c1\",\"name\":\"Anita Chen\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/ec50fa1c488b53b24ac247a1a683afc2d2c7a34b2bab56ccbb2e7bc956449778?s=96&d=mm&r=g08ee5d73485bba7d6a61df5e1da32ff1\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/ec50fa1c488b53b24ac247a1a683afc2d2c7a34b2bab56ccbb2e7bc956449778?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/ec50fa1c488b53b24ac247a1a683afc2d2c7a34b2bab56ccbb2e7bc956449778?s=96&d=mm&r=g\",\"caption\":\"Anita Chen\"},\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/video-dev\\\/author\\\/chb\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Deep Encode: Part II - Video-Dev","description":"Welcome back to the Deep Encode series! In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/","og_locale":"en_US","og_type":"article","og_title":"Deep Encode: Part II - Video-Dev","og_description":"Welcome back to the Deep Encode series! 
In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.","og_url":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/","og_site_name":"Video-Dev","article_published_time":"2020-12-17T07:57:20+00:00","article_modified_time":"2020-12-17T07:57:22+00:00","og_image":[{"url":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-698x211.png","type":"","width":"","height":""}],"author":"Anita Chen","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Anita Chen","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#article","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/"},"author":{"name":"Anita Chen","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#\/schema\/person\/66a433660483086bca16d8f94bc0e5c1"},"headline":"Deep Encode: Part II","datePublished":"2020-12-17T07:57:20+00:00","dateModified":"2020-12-17T07:57:22+00:00","mainEntityOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/"},"wordCount":1095,"commentCount":0,"image":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#primaryimage"},"thumbnailUrl":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-698x211.png","keywords":["per-scene encoding","per-title encoding","video encoding"],"articleSection":["Encoding","Per-Scene Encoding","Per-Title Encoding","video encoding","Video Quality 
Metrics"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/","url":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/","name":"Deep Encode: Part II - Video-Dev","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#website"},"primaryImageOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#primaryimage"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#primaryimage"},"thumbnailUrl":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1-698x211.png","datePublished":"2020-12-17T07:57:20+00:00","dateModified":"2020-12-17T07:57:22+00:00","author":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#\/schema\/person\/66a433660483086bca16d8f94bc0e5c1"},"description":"Welcome back to the Deep Encode series! 
In this post, we will dive deep into the \u2018complexity analysis\u2019 portion of the project, in which we will introduce the workers and API\u2019S that have been developed to support the video analysis.","breadcrumb":{"@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#primaryimage","url":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1.png","contentUrl":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-content\/uploads\/2020\/10\/complexity-1.png","width":1776,"height":536},{"@type":"BreadcrumbList","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/deep-encode-part-ii\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/websites.fraunhofer.de\/video-dev\/"},{"@type":"ListItem","position":2,"name":"Deep Encode: Part II"}]},{"@type":"WebSite","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#website","url":"https:\/\/websites.fraunhofer.de\/video-dev\/","name":"Video-Dev","description":"Future Applications and Media - Video Development Blog","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/websites.fraunhofer.de\/video-dev\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/websites.fraunhofer.de\/video-dev\/#\/schema\/person\/66a433660483086bca16d8f94bc0e5c1","name":"Anita 
Chen","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/ec50fa1c488b53b24ac247a1a683afc2d2c7a34b2bab56ccbb2e7bc956449778?s=96&d=mm&r=g08ee5d73485bba7d6a61df5e1da32ff1","url":"https:\/\/secure.gravatar.com\/avatar\/ec50fa1c488b53b24ac247a1a683afc2d2c7a34b2bab56ccbb2e7bc956449778?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ec50fa1c488b53b24ac247a1a683afc2d2c7a34b2bab56ccbb2e7bc956449778?s=96&d=mm&r=g","caption":"Anita Chen"},"url":"https:\/\/websites.fraunhofer.de\/video-dev\/author\/chb\/"}]}},"_links":{"self":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/794","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/comments?post=794"}],"version-history":[{"count":10,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/794\/revisions"}],"predecessor-version":[{"id":831,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/posts\/794\/revisions\/831"}],"wp:attachment":[{"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/media?parent=794"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/categories?post=794"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/tags?post=794"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/video-dev\/wp-json\/wp\/v2\/coauthors?post=794"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}