{"id":1728,"date":"2024-01-05T13:38:49","date_gmt":"2024-01-05T12:38:49","guid":{"rendered":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/?p=1728"},"modified":"2025-10-29T13:53:03","modified_gmt":"2025-10-29T12:53:03","slug":"annotation-concepts-in-mikaia","status":"publish","type":"post","link":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/","title":{"rendered":"Annotation Concepts in MIKAIA for Whole-Slide-Images"},"content":{"rendered":"\n<p>The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. <a href=\"http:\/\/www.mikaia.ai\">MIKAIA<sup>\u00ae<\/sup><\/a> uses various annotation concepts, and understanding these concepts will make your experience with MIKAIA<sup>\u00ae<\/sup> much smoother and more efficient.<\/p>\n\n\n\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_83 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 
0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Annotations_and_Annotation_Classes_for_Whole-Slide-Images\" >Annotations and Annotation Classes for Whole-Slide-Images<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Moving_Annotations_into_another_Class\" >Moving Annotations into another Class<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Annotation_Shapes\" >Annotation Shapes<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Annotation_Tools\" >Annotation Tools<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Managing_Annotation_Classes_in_Groups\" >Managing Annotation Classes in Groups<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Annotation_Class_Tags\" >Annotation Class Tags<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-7\" 
href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Special_Classes_%E2%80%9CIgnore%E2%80%9D_%E2%80%9CTissue%E2%80%9D_%E2%80%9CScan_Areas%E2%80%9D\" >Special Classes: &#8220;Ignore&#8221;, &#8220;Tissue&#8221;, &#8220;Scan Areas&#8221;<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Before_analysis_Tissue_detection_division_into_Scan_Areas_and_ROIs\" >Before analysis: Tissue detection, division into Scan Areas and ROIs<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#Tissue_Microarrays_TMAs\" >Tissue Microarrays (TMAs)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#How_to_select_Annotations\" >How to select Annotations<\/a><\/li><\/ul><\/nav><\/div>\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Annotations_and_Annotation_Classes_for_Whole-Slide-Images\"><\/span>Annotations and Annotation Classes for Whole-Slide-Images<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Annotation Class<\/strong> (example &#8220;Tumor&#8221;)\n<ul class=\"wp-block-list\">\n<li>Attributes\n<ul class=\"wp-block-list\">\n<li>Class name<\/li>\n\n\n\n<li>Tags<\/li>\n\n\n\n<li>Shape Style, fill color, outline color, outline width, transparency<\/li>\n\n\n\n<li>Text Style: color, font size<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Annotations<\/strong>\n<ul class=\"wp-block-list\">\n<li>Annotation 1\n<ul class=\"wp-block-list\">\n<li>Attributes: position, annotation type (rectangle, polygon, &#8230;), shape, z-order, 
text<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Annotation 2<\/li>\n\n\n\n<li>Annotation N<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>Annotations are drawn manually by the user or generated by an app. An annotation always belongs to an <strong>Annotation Class<\/strong>. A class name could be &#8220;Tumor&#8221;, &#8220;positive cells&#8221;, or similar. When a user draws a new annotation, it is added to the currently selected class. Classes are listed in the &#8220;Annotations&#8221; sidebar. When no class exists yet, a new class with a placeholder name is automatically created. Class names must be unique. Individual annotations cannot be styled; instead, the <strong>Annotation Style <\/strong>(appearance) is a property of the annotation class. Annotations of different types (rectangle, polygon, brush, &#8230;) can be mixed in the same class.<\/p>\n\n\n\n<p>Each annotation can have a text. Additionally, when an annotation is selected, its position and area (in \u00b5m\u00b2) are displayed in a small text box right next to the annotation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Moving_Annotations_into_another_Class\"><\/span>Moving Annotations into another Class<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Annotations can be assigned to another class in two ways:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>First select one or multiple annotations (see the chapter further below on ways to select annotations), then click &#8220;<strong>Adopt selected annotations<\/strong>&#8221; in the target class&#8217;s drop-down menu (in the Annotation side panel).<\/li>\n\n\n\n<li>The <strong>Class Changer Brush<\/strong> will move all annotations that it touches into the currently selected class. 
The brush&#8217;s shape is a circle and its diameter can be changed via the toolbar or with CTRL+mouse wheel.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Annotation_Shapes\"><\/span>Annotation Shapes<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>By now, it is clear how annotations in MIKAIA<sup>\u00ae<\/sup> are organized in classes. Each annotation is of a certain shape type. Currently supported types are: <strong>line, rectangle, ellipse, polygon, path, cell, emoji\/icon, margin, heatmap \/ mask<\/strong>. The difference between polygons and paths is that a path can have holes and a polygon cannot. Actions are available to convert various shapes into a polygon or path. When a path is converted into a polygon, all holes become independent polygons. For paths, advanced operations are available to fuse, subtract, intersect, or clip-to-tissue. The margin annotation is a polyline, but its width can be interactively changed, which makes it ideal for marking the tumor microenvironment. Cell annotations may comprise two polygons: one that outlines the nucleus and one that outlines the cell boundary. In contrast to all other types, heatmaps and masks are not vectorized but rasterized. Except for cell and heatmap\/mask annotations, all other annotations can be drawn manually.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Annotation_Tools\"><\/span>Annotation Tools<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Various annotation tools are available: The pointing-hand <strong>Hand Tool<\/strong> is used to pan the slide or select single annotations. The <strong>Selection Tool<\/strong> can be used to multi-select annotations by drawing a rectangle. The line, emoji, rectangle, and ellipse tools should not require further explanation. The <strong>Point Tool<\/strong>, in contrast to the <strong>Ellipse Tool<\/strong>, creates a fixed-size circle at each click. 
The <strong>Pen Tool<\/strong> creates polygons and can be used in one of two ways: by keeping the mouse button pressed or by clicking for each additional line segment. A right-click removes the last segment. The tool allows panning without interrupting the current annotation. This tool is also the preferred choice when drawing annotations on a <strong>touchscreen <\/strong>with a stylus pen (e.g., WACOM tablets are popular). The <strong>Margin Tool<\/strong> creates a polyline and the mouse wheel is used to configure its width. The <strong>Brush Tool<\/strong> draws or extends paths. By pressing the ALT key, the Brush Tool becomes the <strong>Eraser Tool<\/strong>. Similarly, the <strong>Magic Brush Tool<\/strong> also draws or extends paths, and pressing the ALT key turns it into the <strong>Magic Eraser Tool<\/strong>. The Magic Brush and Eraser clip the round shape to the underlying image content. The tolerance and radius can be configured. They also work well in conjunction with Stain Unmixing. The Magic Brush can also be used to create cell segmentations by hovering over a cell, which previews the outline, and then clicking once the outline fits nicely. 
The <strong>Class Changer Tool<\/strong> is a brush that does not create annotations but moves all annotations it touches into the currently selected class.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Managing_Annotation_Classes_in_Groups\"><\/span>Managing Annotation Classes in Groups <span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li>Group 1\n<ul class=\"wp-block-list\">\n<li>Annotation Class 1 <\/li>\n\n\n\n<li>Annotation Class 2<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Group 2\n<ul class=\"wp-block-list\">\n<li>Annotation Class 3<\/li>\n\n\n\n<li>Annotation Class 4<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"322\" height=\"206\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-5.png\" alt=\"\" class=\"wp-image-1747\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-5.png 322w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-5-300x192.png 300w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-5-270x173.png 270w\" sizes=\"(max-width: 322px) 100vw, 322px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>Some image analysis apps, in particular those detecting cells, generate thousands of annotations spread across a large number of annotation classes. Beginning with MIKAIA<sup>\u00ae<\/sup> v1.5, classes can be grouped together in the annotation class list. This helps keep an overview. 
Annotation Class groups can be collapsed so that the Annotation Class side panel does not get cluttered. Further actions are available from an Annotation Class Group: show\/hide all classes, select all annotations, delete all annotations, and more.<\/p>\n\n\n\n<p>Annotation Class Groups are used for different purposes by the various image analysis apps. For instance, the IHC Cell Detection App uses one group per scan area when a slide contains more than one specimen. The Fluorescence Cell Analysis App can create multiple annotations per cell when more than one mode is enabled by the user. It will then group the respective class types: all &#8220;per-marker&#8221; classes (&#8220;CD8+&#8221;, &#8220;Ki67+&#8221;, &#8230;), all &#8220;per-marker-combination&#8221; classes (&#8220;A (CD8<sup>+<\/sup>Ki67<sup>+<\/sup>)&#8221;, &#8220;B (CD3<sup>+<\/sup>CD8<sup>+<\/sup>Ki67<sup>+<\/sup>)&#8221;, &#8230;), and all cluster classes (&#8220;Cluster 1&#8221;, &#8220;Cluster 2&#8221;, &#8230;). The user can also group classes manually simply by dragging one class on top of another.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Annotation_Class_Tags\"><\/span>Annotation Class Tags<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p><strong>Annotation Class Tags<\/strong> are another concept that helps deal with many annotation classes. Most apps automatically assign tags to the classes generated during an analysis. Tags can also be created manually by the user. A tag is a word that can be assigned to one or more classes. Annotation class tags are displayed at the bottom of the annotation side panel as blue rounded buttons. By clicking a tag button, all tagged classes&#8217; visibility is toggled at once, i.e., they are shown or hidden with a single click. Similarly, in controls (used in various places) where the user is asked to select one or more classes, the class selection can be toggled by using tags. 
This is convenient when many classes exist.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Special_Classes_%E2%80%9CIgnore%E2%80%9D_%E2%80%9CTissue%E2%80%9D_%E2%80%9CScan_Areas%E2%80%9D\"><\/span>Special Classes: &#8220;Ignore&#8221;, &#8220;Tissue&#8221;, &#8220;Scan Areas&#8221;<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Three classes have a special meaning:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Tissue class<\/strong>: This class outlines all tissue, i.e., the foreground, in a scan. The &#8220;Tissue&#8221; class is generated by the <strong>Tissue Detection App<\/strong> (one of the free image analysis apps that is also included in MIKAIA<sup>\u00ae<\/sup> lite), but can also be created manually. If no tissue has been detected yet, many other apps will run the Tissue Detection App implicitly as a pre-processing step in order to avoid wasting time on analyzing background. If a Tissue class already exists, it will generally be re-used instead of re-computed. This also allows manual corrections to be made to the &#8220;Tissue&#8221; class before running the &#8220;real&#8221;, more time-consuming analysis.<\/li>\n\n\n\n<li><strong>Ignore class<\/strong>: This class is a means of preventing false positive detections. It is currently honored by the IHC Cell Detection App, HE Cell Detection App, Mask-by-Color App and FL Colocalization App. When these apps detect a new cell, they check whether it is contained in an &#8220;Ignore&#8221; annotation and, if so, treat it as a false positive, i.e., delete it and do not add it to the scene. The area of &#8220;Ignore&#8221; annotations is not subtracted from the foreground area, which makes a difference when computing the cell density, stated in cells\/mm\u00b2. 
<br>If an area should be ignored and also not counted as foreground, a hole can instead be added to the Tissue class using the <strong>Eraser or Magic Eraser Tools<\/strong>.<\/li>\n\n\n\n<li><strong>Scan Areas<\/strong> class: Sometimes glass slides contain more than one specimen, e.g., to save scanning time or when working with tissue microarrays (TMAs). When glass slides contain multiple specimens, it is important to calculate statistics such as tissue area (in mm\u00b2), DAB+ cell density (in cells\/mm\u00b2), the number of hotspots, the number of cell clusters, and the histogram over cluster sizes separately for each specimen. To this end, a scan can be subdivided into <strong>Scan Areas<\/strong>. The user creates the special class and draws rectangles. Each rectangle is treated as a scan area. Each rectangle&#8217;s annotation text serves as the scan area name and shows up in the CSV exported after an image analysis. When no texts are entered, scan areas are denoted as &#8220;Scan Area 1&#8221;, &#8220;Scan Area 2&#8221; and so forth. When working with Zeiss CZI files, scan areas can even be automatically extracted from the metadata of the whole-slide-image file. 
For other scan formats this is currently (MIKAIA<sup>\u00ae<\/sup> v1.5) not yet possible.<\/li>\n<\/ul>\n\n\n\n<p>Special classes can be created from the &#8220;Annotations&#8221; side panel&#8217;s drop-down menu: <\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"476\" height=\"470\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image.png\" alt=\"\" class=\"wp-image-1740\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image.png 476w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-300x296.png 300w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-370x365.png 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-270x267.png 270w\" sizes=\"(max-width: 476px) 100vw, 476px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Before_analysis_Tissue_detection_division_into_Scan_Areas_and_ROIs\"><\/span>Before analysis: Tissue detection, division into Scan Areas and ROIs<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"671\" height=\"911\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-4.png\" alt=\"\" class=\"wp-image-1745\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-4.png 671w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-4-221x300.png 221w, 
https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-4-370x502.png 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-4-270x367.png 270w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-4-570x774.png 570w\" sizes=\"(max-width: 671px) 100vw, 671px\" \/><figcaption class=\"wp-element-caption\">Before analysis: the user has run Tissue Detection and drawn &#8220;Metastasis&#8221; and &#8220;Ignore&#8221; annotations<\/figcaption><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"674\" height=\"910\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-3.png\" alt=\"\" class=\"wp-image-1744\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-3.png 674w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-3-222x300.png 222w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-3-370x500.png 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-3-270x365.png 270w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-3-570x770.png 570w\" sizes=\"(max-width: 674px) 100vw, 674px\" \/><figcaption class=\"wp-element-caption\">After analysis: the detected cells and hotspots are grouped by scan area and ROI<\/figcaption><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>It is certainly possible to batch-analyze a cohort of immuno-stained scans with the <a href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/mikaia-ihc-cell-detection-app\/\">IHC Cell Detection App<\/a> without any preparation. 
As a first step when analyzing each slide, the app will automatically use the Tissue Detection App to distinguish foreground from background and then divide only the foreground into (overlapping) tiles and analyze them one by one.<\/p>\n\n\n\n<p>But in some cases, it is necessary or advantageous to prepare each slide beforehand, for instance when slides need to be divided into scan areas or when additional ROIs such as Metastases need to be drawn in order to distinguish between DAB+ cells <strong>inside a metastasis vs. outside a metastasis<\/strong> (<strong>vs. near the metastasis<\/strong> when marking the tumor microenvironment with zones\/margins).<\/p>\n\n\n\n<p>The <strong>hierarchy of a scan<\/strong> with two specimens (e.g., in a preclinical study: sections from two mice) where annotations outline metastases (drawn manually, with the Mask By Color App, or with the AI Author App) looks like this:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>whole-slide-image\n<ul class=\"wp-block-list\">\n<li>&#8220;Scan Area Specimen 1&#8221;\n<ul class=\"wp-block-list\">\n<li>Tissue (Foreground)\n<ul class=\"wp-block-list\">\n<li>&#8220;Metastasis&#8221;<br>meaning: inside any of the annotations in this class<\/li>\n\n\n\n<li>&#8220;Rest of tissue&#8221;<br>meaning: not inside any annotation of the &#8220;Metastasis&#8221; class, but inside a &#8220;Tissue&#8221; annotation and inside the &#8220;Scan Area Specimen 1&#8221; rectangle. 
<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>&#8220;Scan Area Specimen 2&#8221;\n<ul class=\"wp-block-list\">\n<li>Tissue\n<ul class=\"wp-block-list\">\n<li>Metastasis<\/li>\n\n\n\n<li>Rest of tissue<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p>In the configuration panel, the IHC Cell Detection App then needs to be told that the class &#8220;Metastasis&#8221; shall be regarded as a ROI by selecting it in the class list in the section &#8220;Divide by ROIs&#8221;. Additionally, the &#8220;Divide by Scan Areas&#8221; switch needs to be toggled on, which is the case by default. <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-top is-layout-flow wp-block-column-is-layout-flow\">\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;6a0140eee31a3&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"6a0140eee31a3\" class=\"wp-block-image size-medium wp-lightbox-container\"><img decoding=\"async\" width=\"300\" height=\"209\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-2-300x209.png\" alt=\"\" class=\"wp-image-1742\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-2-300x209.png 300w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-2-370x258.png 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-2-270x188.png 270w, 
https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/image-2.png 430w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Tissue_Microarrays_TMAs\"><\/span>Tissue Microarrays (TMAs)<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>When working with TMAs, the Tissue Detection App can be configured to automatically create a <strong>Scan Area<\/strong> per TMA Core (see figure below). 
This way, downstream analysis apps such as the IHC Cell Detection App, the FL Cell Analysis App, or the Annotation Metrics App will automatically compute statistics individually per TMA core.<\/p>\n\n\n\n<p>Cores are automatically named: rows are lettered A, B, C, &#8230; and columns are numbered 1, 2, 3, &#8230;; e.g., the top-left core is &#8220;A1&#8221;.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"626\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-1024x626.png\" alt=\"\" class=\"wp-image-2112\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-1024x626.png 1024w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-300x183.png 300w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-768x469.png 768w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-1536x939.png 1536w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-370x226.png 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-270x165.png 270w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-570x348.png 570w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1-740x452.png 740w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/05\/image-1.png 1790w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"How_to_select_Annotations\"><\/span>How to select Annotations<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>There are multiple ways to (multi-)select annotations:<\/p>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><strong>Hand Tool<\/strong>: Annotations can be selected by clicking on them with the Hand Tool enabled. When the CTRL button is pressed simultaneously, the clicked annotation is added to the already selected annotations (multi-selection). <\/li>\n\n\n\n<li><strong>Selection Tool<\/strong>: Alternatively, the<strong> <\/strong>Selection Tool<strong> <\/strong>can be used to draw a rectangle, which will select all completely contained annotations. <\/li>\n\n\n\n<li><strong>Select all intersecting annotations<\/strong>: When one or multiple annotations are selected, the action to &#8220;select all intersecting annotations&#8221; becomes available. It will select all annotations from all classes whose shape intersects with any of the currently selected annotations.  <\/li>\n\n\n\n<li><strong>Select intersecting subset<\/strong>: When one or multiple annotations are selected, the action to select an intersecting subset becomes available. A dialog will show up, where the user can choose to select annotations from certain classes and\/or with certain shapes.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. MIKAIA\u00ae uses various annotation concepts, and understanding these concepts will make your experience with MIKAIA\u00ae much smoother and more efficient. Annotations and Annotation Classes for Whole-Slide-Images Annotations are drawn manually by the user or they are generated by an app. 
An [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":1750,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,28],"tags":[110,7,29,78],"coauthors":[56],"class_list":["post-1728","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-digital-pathology","category-mikaia-university","tag-concept","tag-mikaia","tag-mikaia-app-note","tag-whole-slide-images"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Annotation Concepts in MIKAIA for Whole-Slide-Images - SMART SENSING insights<\/title>\n<meta name=\"description\" content=\"The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. This article explains various annotation concepts in MIKAIA.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Annotation Concepts in MIKAIA for Whole-Slide-Images - SMART SENSING insights\" \/>\n<meta property=\"og:description\" content=\"The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. 
This article explains various annotation concepts in MIKAIA.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/\" \/>\n<meta property=\"og:site_name\" content=\"SMART SENSING insights\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/FraunhoferIIS\" \/>\n<meta property=\"article:published_time\" content=\"2024-01-05T12:38:49+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-29T12:53:03+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1101\" \/>\n\t<meta property=\"og:image:height\" content=\"900\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Volker Bruns\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Volker Bruns\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/\"},\"author\":{\"name\":\"Volker Bruns\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/person\\\/fd37d35c4d3576840cf0bc3f74eafb98\"},\"headline\":\"Annotation Concepts in MIKAIA for Whole-Slide-Images\",\"datePublished\":\"2024-01-05T12:38:49+00:00\",\"dateModified\":\"2025-10-29T12:53:03+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/\"},\"wordCount\":2003,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/1.jpg\",\"keywords\":[\"Concept\",\"MIKAIA\u00ae\",\"MIKAIA\u00ae App Note\",\"Whole-Slide-Images\"],\"articleSection\":[\"Digital Pathology\",\"MIKAIA 
University\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/\",\"name\":\"Annotation Concepts in MIKAIA for Whole-Slide-Images - SMART SENSING insights\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/1.jpg\",\"datePublished\":\"2024-01-05T12:38:49+00:00\",\"dateModified\":\"2025-10-29T12:53:03+00:00\",\"description\":\"The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. 
This article explains various annotation concepts in MIKAIA.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#primaryimage\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/1.jpg\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/1.jpg\",\"width\":1101,\"height\":900,\"caption\":\"Copyright: Fraunhofer IIS\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/annotation-concepts-in-mikaia\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Annotation Concepts in MIKAIA for Whole-Slide-Images\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#website\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/\",\"name\":\"SMART SENSING insights\",\"description\":\"learn more about our focus research areas sensor technology, electronics, and artificial 
intelligence\",\"publisher\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#organization\",\"name\":\"Fraunhofer IIS\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Fraunhofer-IIS-1.png\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Fraunhofer-IIS-1.png\",\"width\":826,\"height\":299,\"caption\":\"Fraunhofer IIS\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/FraunhoferIIS\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/fraunhofer-iis\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/person\\\/fd37d35c4d3576840cf0bc3f74eafb98\",\"name\":\"Volker 
Bruns\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/cropped-cropped-Bruns_Volker_DSC1679_Pulkert-scaled-1-96x96.jpg6283dd75b937ac114f8ae92ee4dbfc95\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/cropped-cropped-Bruns_Volker_DSC1679_Pulkert-scaled-1-96x96.jpg\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/cropped-cropped-Bruns_Volker_DSC1679_Pulkert-scaled-1-96x96.jpg\",\"caption\":\"Volker Bruns\"},\"description\":\"Volker is a digital pathology and spatial biology enthusiast with a computer science background. Volker and his team develop commercial image analysis software for digital pathology and offer contract development, as well as image analysis as a service in the life sciences.\",\"sameAs\":[\"https:\\\/\\\/orcid.org\\\/0000-0001-7693-3542\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/volker-bruns\\\/\",\"https:\\\/\\\/www.youtube.com\\\/playlist?list=PL1xbox4kZP0OL7PskwG_xYgmu9iV_N5Fd\"],\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/author\\\/volker-brunsiis-fraunhofer-de\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Annotation Concepts in MIKAIA for Whole-Slide-Images - SMART SENSING insights","description":"The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. 
This article explains various annotation concepts in MIKAIA.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/","og_locale":"en_US","og_type":"article","og_title":"Annotation Concepts in MIKAIA for Whole-Slide-Images - SMART SENSING insights","og_description":"The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. This article explains various annotation concepts in MIKAIA.","og_url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/","og_site_name":"SMART SENSING insights","article_publisher":"https:\/\/www.facebook.com\/FraunhoferIIS","article_published_time":"2024-01-05T12:38:49+00:00","article_modified_time":"2025-10-29T12:53:03+00:00","og_image":[{"width":1101,"height":900,"url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/1.jpg","type":"image\/jpeg"}],"author":"Volker Bruns","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Volker Bruns","Est. 
reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#article","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/"},"author":{"name":"Volker Bruns","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/person\/fd37d35c4d3576840cf0bc3f74eafb98"},"headline":"Annotation Concepts in MIKAIA for Whole-Slide-Images","datePublished":"2024-01-05T12:38:49+00:00","dateModified":"2025-10-29T12:53:03+00:00","mainEntityOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/"},"wordCount":2003,"commentCount":0,"publisher":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#organization"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#primaryimage"},"thumbnailUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/1.jpg","keywords":["Concept","MIKAIA\u00ae","MIKAIA\u00ae App Note","Whole-Slide-Images"],"articleSection":["Digital Pathology","MIKAIA University"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/","name":"Annotation Concepts in MIKAIA for Whole-Slide-Images - SMART SENSING 
insights","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#website"},"primaryImageOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#primaryimage"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#primaryimage"},"thumbnailUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/1.jpg","datePublished":"2024-01-05T12:38:49+00:00","dateModified":"2025-10-29T12:53:03+00:00","description":"The first step to creating a supervised Digital Pathology AI is to annotate whole-slide-images. This article explains various annotation concepts in MIKAIA.","breadcrumb":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#primaryimage","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/1.jpg","contentUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2024\/01\/1.jpg","width":1101,"height":900,"caption":"Copyright: Fraunhofer IIS"},{"@type":"BreadcrumbList","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/annotation-concepts-in-mikaia\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/"},{"@type":"ListItem","position":2,"name":"Annotation Concepts in MIKAIA for Whole-Slide-Images"}]},{"@type":"WebSite","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#website","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/","name":"SMART 
SENSING insights","description":"learn more about our focus research areas sensor technology, electronics, and artificial intelligence","publisher":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#organization","name":"Fraunhofer IIS","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/logo\/image\/","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/06\/Fraunhofer-IIS-1.png","contentUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/06\/Fraunhofer-IIS-1.png","width":826,"height":299,"caption":"Fraunhofer IIS"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/FraunhoferIIS","https:\/\/www.linkedin.com\/company\/fraunhofer-iis"]},{"@type":"Person","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/person\/fd37d35c4d3576840cf0bc3f74eafb98","name":"Volker 
Bruns","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/05\/cropped-cropped-Bruns_Volker_DSC1679_Pulkert-scaled-1-96x96.jpg6283dd75b937ac114f8ae92ee4dbfc95","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/05\/cropped-cropped-Bruns_Volker_DSC1679_Pulkert-scaled-1-96x96.jpg","contentUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/05\/cropped-cropped-Bruns_Volker_DSC1679_Pulkert-scaled-1-96x96.jpg","caption":"Volker Bruns"},"description":"Volker is a digital pathology and spatial biology enthusiast with a computer science background. Volker and his team develop commercial image analysis software for digital pathology and offer contract development, as well as image analysis as a service in the life sciences.","sameAs":["https:\/\/orcid.org\/0000-0001-7693-3542","https:\/\/www.linkedin.com\/in\/volker-bruns\/","https:\/\/www.youtube.com\/playlist?list=PL1xbox4kZP0OL7PskwG_xYgmu9iV_N5Fd"],"url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/author\/volker-brunsiis-fraunhofer-de\/"}]}},"_links":{"self":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts\/1728","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/comments?post=1728"}],"version-history":[{"count":28,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts\/1728\/revisions"}],"predecessor-version":[{"id":4529,"href":"https:
\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts\/1728\/revisions\/4529"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/media\/1750"}],"wp:attachment":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/media?parent=1728"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/categories?post=1728"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/tags?post=1728"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/coauthors?post=1728"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}