{"id":861,"date":"2023-09-22T17:49:02","date_gmt":"2023-09-22T15:49:02","guid":{"rendered":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/?p=861"},"modified":"2025-10-29T21:54:35","modified_gmt":"2025-10-29T20:54:35","slug":"robust-ai-models-for-digital-pathology","status":"publish","type":"post","link":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/","title":{"rendered":"Into the Wild: Building Robust AI Models for Digital Pathology"},"content":{"rendered":"\n<p><em>Digital pathology and artificial intelligence (AI) go hand in hand to revolutionize medical diagnostics. However, pathology AI models must be able to cope with a very large variety of histological images if they are to be deployed into the field (i.e., \u201cinto the wild\u201d). AIs are, therefore, required to be very robust. This robustness in turn is highly dependent on the quality of the underlying datasets that were used to train the AI. <a href=\"https:\/\/www.linkedin.com\/in\/michaela-benz-096aab72\/\">Dr. Michaela Benz<\/a>, chief scientist at Fraunhofer IIS and responsible for the scientific coordination of the development of the image analysis software <a href=\"http:\/\/mikaia.ai\">MIKAIA<\/a>\u00ae, and Dr. Petr Kuritcyn, senior scientist and AI expert, explain the importance of robust AI models for digital pathology and the challenges involved.<\/em><\/p>\n\n\n<div class=\"wp-block-image is-style-default\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" width=\"2560\" height=\"1440\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-scaled.jpg\" alt=\"Dr. Michaela Benz and Dr. 
Petr Kuritcyn\" class=\"wp-image-904\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-scaled.jpg 2560w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-300x169.jpg 300w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-1024x576.jpg 1024w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-768x432.jpg 768w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-1536x864.jpg 1536w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-2048x1152.jpg 2048w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-370x208.jpg 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-270x152.jpg 270w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-570x321.jpg 570w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/Michaela-Benz_Petr-Kuritcyn-edited-740x416.jpg 740w\" sizes=\"(max-width: 2560px) 100vw, 2560px\" \/><figcaption class=\"wp-element-caption\">Dr. Michaela Benz and Dr. Petr Kuritcyn<\/figcaption><\/figure>\n<\/div>\n\n\n<p><strong>The use of AI holds great potential for medicine but has not yet become common practice in medical routine. While AI-based image analysis is already standard practice in radiology, microscopy images are still mostly evaluated manually in pathology. 
What opportunities and possibilities does the use of AI bring to pathology?<\/strong><\/p>\n\n\n\n<p><strong>Petr<\/strong> <strong>Kuritcyn<\/strong>: Currently, digital pathology is still used more often in research than in clinical applications. This will change, however, as pathologists are heavily overworked, and the use of software can help improve the situation. Digital pathologists, for instance, can diagnose while working remotely. Making use of AI-based assistance systems also holds great potential when it comes to more trivial tasks such as counting cells. These tasks can very well be taken over by an AI, making daily routines a little bit more efficient. In addition, the results of AI are often very precise and objective, which can further enhance the quality of diagnostics.<\/p>\n\n\n\n<p>Many algorithms and software for this purpose already exist. While the number of AIs that are approved for primary diagnostics for the European market (CE-IVDR) is still small compared to, for instance, radiology, it is growing fast. It\u2019s only a matter of time. However, for AI to be truly implemented in practice, users\u2019 trust in it must be strengthened, and acceptance needs to be fostered. In my view, this is currently happening: on the one hand, because the algorithms are becoming more reliable and achieve very good results, and on the other hand, because pathologists are discussing how AI-based methods can be integrated into routine workflows and what benefit these methods can offer.<\/p>\n\n\n\n<p><strong>Can you give us a concrete example of that?<\/strong><\/p>\n\n\n\n<p><strong>Michaela<\/strong> <strong>Benz<\/strong>: A good example from everyday pathology is the characterization of tumors, e.g., Gleason grading for prostate biopsies. This is a very common task that serves as a decision-making basis for therapy selection. 
Among other things, various so-called biomarkers are used for characterization, which are typically determined by manual evaluation of tissue sections under the microscope by pathologists. Typical tasks in the determination of biomarkers include counting certain cell types within a defined area or assessing how strongly a tissue structure has already deviated from its normal healthy layout.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>AIs cleared for use in diagnostics today can only do a single and very narrow task. A whole arsenal of AIs is required to assist in the wide variety of tasks pathologists face in their everyday routine.<\/p>\n<cite>Dr. Petr Kuritcyn<\/cite><\/blockquote>\n\n\n\n<p>Another example of a biomarker is the ratio of tumor to stromal area within a tumor. In traditional pathology, the tissue section of a tumor is viewed under a microscope, and pathologists estimate the proportion of tumor tissue compared to stroma. This estimation can vary significantly between pathologists and yield subjective results. <a href=\"https:\/\/www.mdpi.com\/2072-6694\/15\/10\/2675\">We have recently shown this in a survey with pathologists from our network.<\/a> With the help of AI, the assessment of the tumor-to-stroma ratio can be automated and performed on a larger scale, allowing for more accurate and efficient analyses.<\/p>\n\n\n\n<p>Another use case where the automatic assessment of the tumor-stroma ratio is helpful is molecular pathology. Cells are scraped off the tumor tissue for molecular diagnostics. When the DNA or RNA from all scraped cells is pooled, it is important that the sample consists predominantly of tumor cells. Otherwise, the results would be distorted. 
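The tumor-to-stroma ratio mentioned here can be sketched in a few lines. This is a minimal illustration, not MIKAIA's implementation: the function name, the label values, and the toy mask are assumptions; in practice the per-pixel class mask would come from an AI segmentation of the whole-slide image.

```python
import numpy as np

def tumor_stroma_ratio(class_mask, tumor_label=1, stroma_label=2):
    """Fraction of tumor area relative to tumor plus stroma area.

    `class_mask` is a 2-D array of integer class labels per pixel.
    The label values are placeholders, not MIKAIA's actual classes.
    """
    tumor = np.count_nonzero(class_mask == tumor_label)
    stroma = np.count_nonzero(class_mask == stroma_label)
    if tumor + stroma == 0:
        raise ValueError("mask contains neither tumor nor stroma pixels")
    return tumor / (tumor + stroma)

# toy 4x4 mask: 8 tumor pixels, 4 stroma pixels, 4 background pixels
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [2, 2, 2, 2],
                 [0, 0, 0, 0]])
print(tumor_stroma_ratio(mask))  # 8 / 12 ≈ 0.667
```

Note that definitions vary in the literature (some report the stroma percentage instead); the choice of denominator is a convention, not part of the algorithm.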
Here, it is helpful to know the exact ratio.<\/p>\n\n\n\n<p><strong>What considerations should be taken into account while developing an AI-based algorithm for digital pathology?<\/strong><\/p>\n\n\n\n<p><strong>Petr<\/strong>: Mimicking or even outperforming an experienced pathologist with an AI is not an easy task at all. Therefore, in the first step, we define a clear objective of what exactly the AI should recognize, and this determines the AI output. To make it clear: AIs cleared for use in diagnostics today can only do a single and very narrow task, often one type of score for one particular type of staining in one particular organ. A whole arsenal of AIs is required to assist in the wide variety of tasks pathologists face in their everyday routine.<\/p>\n\n\n\n<p>In the second step, we look at the input, i.e., the data that is provided to the algorithm for analysis. In pathology, that\u2019s usually images from biopsies and tissue sections. Based on this, a data processing chain is developed that takes you from input to output. The number of analysis steps and the algorithms required depend on the complexity of the task. The quality of the data we receive from pathologists is crucial for training the AI algorithms. The more accurate these data sets are, the more accurate the results of our algorithms will be.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">It&#8217;s all about the data, data, data<\/h2>\n\n\n\n<p><strong>Petr, you just pointed out that the availability of high-quality datasets is crucial for obtaining robust AI models. What tips and tricks can you share when it comes to creating and annotating data sets?<\/strong><\/p>\n\n\n\n<p><strong>Petr<\/strong>: Most importantly, data sets need to be representative and cover as closely as possible the heterogeneous images the AI will encounter once released into the field. 
In order to represent this diversity, it is helpful to source samples from multiple centers and digitize the glass slides with different scanners from different vendors. That\u2019s because each lab uses its own staining protocol, and the scanners have different optics, cameras, and also color post-processing. In combination, these effects lead to surprisingly strong differences in appearance.<\/p>\n\n\n\n<p>Coping with this so-called <strong>domain shift<\/strong> problem has almost become its own scientific field. One approach is to <strong>normalize<\/strong> images at runtime before they are analyzed. Here, color-transforming algorithms are available, or style-transfer AIs can be used to make the unknown slide look more like the slides from the training set. A drawback is that this requires additional computational power, because one AI is employed to make the &#8220;actual&#8221; AI work. <br>Another approach &#8211; and we follow this strategy &#8211; is to train the AI from the start so that it cannot rely on subtle color differences. We achieve this by <strong>augmenting<\/strong> our training data set. What we (and many other researchers) do is duplicate a training image patch and add some random variation to it, e.g., we change the hue or saturation slightly, or we unmix the hematoxylin and eosin stains so we can change their intensities separately. You can also simulate camera noise, de-focus, scale or distort, add in JPEG compression artefacts, or apply many other augmentation techniques. In the end, <strong>the AI has learnt to cope<\/strong> with these variations and will generalize better. To validate this, we use annotated test datasets that really come from different labs and scanners and measure how our AI performs.<\/p>\n\n\n\n<p>It is also critical that the data is annotated as accurately as possible. This means that the structures that the AI is supposed to recognize automatically must be marked with high precision by experts. 
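The augmentation strategy Petr describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the production pipeline: the per-channel color shift, contrast jitter, and noise are simple stand-ins for the hue/saturation variation and camera noise mentioned above, and real pipelines would add stain unmixing, blur, and JPEG artefacts.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(patch, rng):
    """Randomly vary one training patch (float values in [0, 1], shape HxWx3).

    Illustrative stand-ins for the jitter described in the interview:
    a small random per-channel color shift, a saturation/contrast
    change around the patch mean, and additive sensor noise.
    """
    out = patch.copy()
    out += rng.uniform(-0.03, 0.03, size=3)            # per-channel color shift
    mean = out.mean(axis=(0, 1), keepdims=True)
    out = mean + rng.uniform(0.9, 1.1) * (out - mean)  # contrast/saturation jitter
    out += rng.normal(0.0, 0.01, size=out.shape)       # simulated camera noise
    return np.clip(out, 0.0, 1.0)                      # keep valid intensity range

patch = rng.random((64, 64, 3))  # stand-in for an extracted image patch
aug = augment(patch, rng)
```

Applying `augment` independently each time a patch is drawn effectively multiplies the training set, teaching the model that such variations carry no diagnostic meaning.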
Ideally, annotations are created independently by multiple pathologists to avoid individual biases. This enhances the robustness of the algorithm.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"679\" src=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-1024x679.jpg\" alt=\"Intestinal tissue sections (with adenocarcinoma), digitized with scanners from different vendors\" class=\"wp-image-898\" srcset=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-1024x679.jpg 1024w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-300x199.jpg 300w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-768x509.jpg 768w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-1536x1018.jpg 1536w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-2048x1357.jpg 2048w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-370x245.jpg 370w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-270x179.jpg 270w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-570x378.jpg 570w, https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/glass-slide-digitization_different-scanners-740x490.jpg 740w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption 
class=\"wp-element-caption\">Intestinal tissue sections (with adenocarcinoma), digitized with scanners from different vendors<\/figcaption><\/figure>\n\n\n\n<p><strong>Michaela<\/strong>: It is important that we precisely understand the task and the data. This includes getting an understanding of the variation present in the images. For example, a colon tumor can vary significantly, depending for instance on the subtype or grade, and the AI needs to recognize all these different variations. But large size alone does not guarantee good quality. For example, if a database is built with a million images of the same tumor type, the AI will not work. A good coverage of the entire diversity is more important than the sheer quantity.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How big is big enough?<\/h2>\n\n\n\n<p><strong>Can you paint a more detailed picture of just how big these data sets are?<\/strong><\/p>\n\n\n\n<p><strong>Petr<\/strong>: The size of the database depends very much on the AI approach we use and the complexity of the question at hand. For example, the dataset for an AI that distinguishes multiple tissue types obviously needs to be larger than for a simple decision between A and B. However, there is usually no rule of thumb for the required size, it is based on previous experience. For a complex task, however, we are talking about using approximately one million samples in the dataset. To be clear, by sample we do not mean entire whole-slide-images that are sometimes larger than 100.000 x 100.000 pixels, but we mean individual image patches that we extract from these whole-slide-images.<\/p>\n\n\n\n<p><strong>And how does the training process work exactly?<\/strong><\/p>\n\n\n\n<p><strong>Michaela<\/strong>: During the development of the software, several data sets \u2013 typically a <em>training set<\/em>, a <em>validation set<\/em> and a <em>test set<\/em> \u2013 are typically used, each for a different task in the AI development process. 
These sets contain a large number of examples with their corresponding class labels, such as \u201ctumor\u201d or \u201cstroma\u201d. Initially, the untrained AI model makes wild guesses, but using the \u201cground truth\u201d annotations provided by the expert, the model is told whether its guess was right or wrong. Each time, it adjusts its internal parameters, and after a while it has learnt to identify patterns and relationships.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>A large training set alone does not guarantee good quality. If a database is built with a million images of the same tumor type, the AI will still not work well for other types. A good coverage of the entire diversity is more important than the sheer quantity.<\/p>\n<cite>Dr. Michaela Benz<\/cite><\/blockquote>\n\n\n\n<p>For this training procedure, the <em>training set<\/em> is used. The <em>validation set<\/em> is used to select the best model out of the various models resulting from the training procedure. The <em>test set<\/em> is initially set aside and not used at all during the training. It is reserved for evaluating how well (or how badly) the fully trained and optimized AI model performs on previously unseen data. When a multi-centric data set (meaning data from multiple labs) is available, it is good practice to keep all images from at least one entire lab aside and use them as the test set in order to find out how well the AI generalizes to data from another lab.<\/p>\n\n\n\n<p><strong>What are the particular challenges when developing an AI for pathology? How is this different from other domains such as radiology?<\/strong><\/p>\n\n\n\n<p><strong>Michaela<\/strong>: There are only a few fundamental differences. One difference, however, is the size of images. In pathology, one single digital tissue section can be gigantic. 
An AI that a dermatologist can use to classify photos of moles operates on images with a size of 5000 x 5000 pixels at the most, and that\u2019s already quite large. A single whole-slide-image scan, however, can easily be 100 times larger. Scans typically have a file size of multiple gigabytes despite employing image compression. The large image size has an impact on the selection of image analysis methods, as standard methods would often simply take too long to compute.<\/p>\n\n\n\n<p>Pathologists examine a specimen at different magnifications. They require an overview, but for some tasks it is important that they zoom in to the point that individual cells are clearly distinguishable. Some AI approaches mimic this by analyzing the images at multiple resolutions. Whole-slide-image file formats are built this way, too: they are redundant and contain not only the full-resolution image but also downscaled versions, so that a low-resolution overview image is immediately available and does not have to be computed.<\/p>\n\n\n\n<p>Additionally, the diversity of tissue in pathology is particularly challenging. Representing this biological diversity in a meaningful way in a data set is very time-consuming. This diversity is underlined by the fact that pathologists typically select a sub-specialty such as neuropathology, dermatopathology, nephropathology, etc.<\/p>\n\n\n\n<p>In many use cases, it is very challenging to generate objective reference labels for training and also evaluating the AI. For example, objective area estimation is a challenging task for a human. In the above-mentioned study, we were able to demonstrate these varying assessments, where both experienced pathologists and <a href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/publication-tumor-stroma-ratio-in-colorectal-cancer\/\">our software estimated the area ratio of tumor and stromal tissue<\/a>. 
The estimations among the pathologists varied significantly, highlighting the challenge of precisely defining ground truth for data annotation. In another project, we considered training an AI to detect inflamed areas that have already progressed into an early stage of a malignant tumor in gastrointestinal biopsies of patients with inflammatory bowel disease. However, we learnt that even the most experienced pathologists frequently disagree on this challenging task, which is why the medical guidelines also require a mandatory second opinion.<\/p>\n\n\n\n<p><strong>Thank you, Michaela and Petr, for sharing all these insights. Let\u2019s talk again some time!<\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Image copyright (featured image): Fraunhofer IIS \/ Paul Pulkert<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Digital pathology and artificial intelligence (AI) go hand in hand to revolutionize medical diagnostics. However, pathology AI models must be able to cope with a very large variety of histological images if they are to be deployed into the field (i.e., \u201cinto the wild\u201d). AIs are, therefore, required to be very robust. 
This robustness in [&hellip;]<\/p>\n","protected":false},"author":10,"featured_media":903,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,35],"tags":[37,36,7],"coauthors":[55],"class_list":["post-861","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-digital-pathology","category-life-science","tag-ai","tag-interview","tag-mikaia"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Into the Wild: Building Robust AI Models for Digital Pathology - SMART SENSING insights<\/title>\n<meta name=\"description\" content=\"Robust AI models for digital pathology depend on the quality of the underlying data. Learn more on how robustness is ensured with MIKAIA.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Into the Wild: Building Robust AI Models for Digital Pathology - SMART SENSING insights\" \/>\n<meta property=\"og:description\" content=\"Robust AI models for digital pathology depend on the quality of the underlying data. 
Learn more on how robustness is ensured with MIKAIA.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/\" \/>\n<meta property=\"og:site_name\" content=\"SMART SENSING insights\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/FraunhoferIIS\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-22T15:49:02+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-29T20:54:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/digital-pathology-software_MICAIA-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1096\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Peer Graulich\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Peer Graulich\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/\"},\"author\":{\"name\":\"Peer Graulich\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/person\\\/2ba3adb1d375d656e9fa2d7c50cff719\"},\"headline\":\"Into the Wild: Building Robust AI Models for Digital Pathology\",\"datePublished\":\"2023-09-22T15:49:02+00:00\",\"dateModified\":\"2025-10-29T20:54:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/\"},\"wordCount\":2240,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/digital-pathology-software_MICAIA-scaled.jpg\",\"keywords\":[\"AI\",\"Interview\",\"MIKAIA\u00ae\"],\"articleSection\":[\"Digital Pathology\",\"Life 
Science\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/\",\"name\":\"Into the Wild: Building Robust AI Models for Digital Pathology - SMART SENSING insights\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/digital-pathology-software_MICAIA-scaled.jpg\",\"datePublished\":\"2023-09-22T15:49:02+00:00\",\"dateModified\":\"2025-10-29T20:54:35+00:00\",\"description\":\"Robust AI models for digital pathology depend on the quality of the underlying data. 
Learn more on how robustness is ensured with MIKAIA.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#primaryimage\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/digital-pathology-software_MICAIA-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/digital-pathology-software_MICAIA-scaled.jpg\",\"width\":2560,\"height\":1096,\"caption\":\"Two researchers are working with digital pathology software MICAIA\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/robust-ai-models-for-digital-pathology\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Into the Wild: Building Robust AI Models for Digital Pathology\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#website\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/\",\"name\":\"SMART SENSING insights\",\"description\":\"learn more about our focus research areas sensor technology, electronics, and artificial 
intelligence\",\"publisher\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#organization\",\"name\":\"Fraunhofer IIS\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Fraunhofer-IIS-1.png\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Fraunhofer-IIS-1.png\",\"width\":826,\"height\":299,\"caption\":\"Fraunhofer IIS\"},\"image\":{\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/FraunhoferIIS\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/fraunhofer-iis\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/#\\\/schema\\\/person\\\/2ba3adb1d375d656e9fa2d7c50cff719\",\"name\":\"Peer 
Graulich\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/cropped-Bewerbungsfoto2-96x96.jpg4efe8007cf10a6ce6084c94a746bb68a\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/cropped-Bewerbungsfoto2-96x96.jpg\",\"contentUrl\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/cropped-Bewerbungsfoto2-96x96.jpg\",\"caption\":\"Peer Graulich\"},\"description\":\"Peer is a technology journalism and PR student at Nuremberg Institute of Technology. With a focus on digital healthcare topics, he contributed as a working student at Fraunhofer IIS.\",\"url\":\"https:\\\/\\\/websites.fraunhofer.de\\\/smart-sensing-insights\\\/author\\\/peer-graulichiis-fraunhofer-de\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Into the Wild: Building Robust AI Models for Digital Pathology - SMART SENSING insights","description":"Robust AI models for digital pathology depend on the quality of the underlying data. Learn more on how robustnes is ensured with MIKAIA.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/","og_locale":"en_US","og_type":"article","og_title":"Into the Wild: Building Robust AI Models for Digital Pathology - SMART SENSING insights","og_description":"Robust AI models for digital pathology depend on the quality of the underlying data. 
Learn more on how robustnes is ensured with MIKAIA.","og_url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/","og_site_name":"SMART SENSING insights","article_publisher":"https:\/\/www.facebook.com\/FraunhoferIIS","article_published_time":"2023-09-22T15:49:02+00:00","article_modified_time":"2025-10-29T20:54:35+00:00","og_image":[{"width":2560,"height":1096,"url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/digital-pathology-software_MICAIA-scaled.jpg","type":"image\/jpeg"}],"author":"Peer Graulich","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Peer Graulich","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#article","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/"},"author":{"name":"Peer Graulich","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/person\/2ba3adb1d375d656e9fa2d7c50cff719"},"headline":"Into the Wild: Building Robust AI Models for Digital Pathology","datePublished":"2023-09-22T15:49:02+00:00","dateModified":"2025-10-29T20:54:35+00:00","mainEntityOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/"},"wordCount":2240,"commentCount":0,"publisher":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#organization"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#primaryimage"},"thumbnailUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/digital-pathology-software_MICAIA-scaled.jpg","keywords":["AI","Interview","MIKAIA\u00ae"],"articleSection":["Digital Pathology","Life 
Science"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/","name":"Into the Wild: Building Robust AI Models for Digital Pathology - SMART SENSING insights","isPartOf":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#website"},"primaryImageOfPage":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#primaryimage"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#primaryimage"},"thumbnailUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/digital-pathology-software_MICAIA-scaled.jpg","datePublished":"2023-09-22T15:49:02+00:00","dateModified":"2025-10-29T20:54:35+00:00","description":"Robust AI models for digital pathology depend on the quality of the underlying data. 
Learn more on how robustness is ensured with MIKAIA.","breadcrumb":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#primaryimage","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/digital-pathology-software_MICAIA-scaled.jpg","contentUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/09\/digital-pathology-software_MICAIA-scaled.jpg","width":2560,"height":1096,"caption":"Two researchers are working with the digital pathology software MIKAIA"},{"@type":"BreadcrumbList","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/robust-ai-models-for-digital-pathology\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/"},{"@type":"ListItem","position":2,"name":"Into the Wild: Building Robust AI Models for Digital Pathology"}]},{"@type":"WebSite","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#website","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/","name":"SMART SENSING insights","description":"learn more about our focus research areas sensor technology, electronics, and artificial 
intelligence","publisher":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#organization","name":"Fraunhofer IIS","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/logo\/image\/","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/06\/Fraunhofer-IIS-1.png","contentUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/06\/Fraunhofer-IIS-1.png","width":826,"height":299,"caption":"Fraunhofer IIS"},"image":{"@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/FraunhoferIIS","https:\/\/www.linkedin.com\/company\/fraunhofer-iis"]},{"@type":"Person","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/#\/schema\/person\/2ba3adb1d375d656e9fa2d7c50cff719","name":"Peer Graulich","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/10\/cropped-Bewerbungsfoto2-96x96.jpg4efe8007cf10a6ce6084c94a746bb68a","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/10\/cropped-Bewerbungsfoto2-96x96.jpg","contentUrl":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-content\/uploads\/2023\/10\/cropped-Bewerbungsfoto2-96x96.jpg","caption":"Peer Graulich"},"description":"Peer is a technology journalism and PR student at 
Nuremberg Institute of Technology. With a focus on digital healthcare topics, he contributed as a working student at Fraunhofer IIS.","url":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/author\/peer-graulichiis-fraunhofer-de\/"}]}},"_links":{"self":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts\/861","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/comments?post=861"}],"version-history":[{"count":11,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts\/861\/revisions"}],"predecessor-version":[{"id":3549,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/posts\/861\/revisions\/3549"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/media\/903"}],"wp:attachment":[{"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/media?parent=861"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/categories?post=861"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/tags?post=861"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/websites.fraunhofer.de\/smart-sensing-insights\/wp-json\/wp\/v2\/coauthors?post=861"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}