{"id":2071,"date":"2022-12-01T21:50:23","date_gmt":"2022-12-01T21:50:23","guid":{"rendered":"https:\/\/www.clouddatainsights.com\/?p=2071"},"modified":"2022-12-02T09:03:43","modified_gmt":"2022-12-02T09:03:43","slug":"6q4-seth-dobrin-from-the-responsible-ai-institute-on-attesting-that-your-app-does-no-harm","status":"publish","type":"post","link":"https:\/\/www.clouddatainsights.com\/6q4-seth-dobrin-from-the-responsible-ai-institute-on-attesting-that-your-app-does-no-harm\/","title":{"rendered":"6Q4: Responsible AI Institute’s Seth Dobrin on Attesting that Your App Does No Harm"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-Depositphotos_160300574_S.jpg\" alt=\"\" class=\"wp-image-2075\" width=\"750\" height=\"375\" srcset=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-Depositphotos_160300574_S.jpg 1000w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-Depositphotos_160300574_S-300x150.jpg 300w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-Depositphotos_160300574_S-768x384.jpg 768w\" sizes=\"(max-width: 750px) 100vw, 750px\" \/><figcaption class=\"wp-element-caption\"><em>Seth Dobrin, President of the Responsible AI Institute, explained the drive for organizations and software providers to confirm that they have complied with regulations or followed best practices in designing and operating AI systems.<\/em><\/figcaption><\/figure><\/div>\n\n\n<p>What is responsible AI? Is it something that we\u2019ll know when we see it? Or conversely, when we discover that AI causes harm rather than the good that was intended? <\/p>\n\n\n\n<p>There is a groundswell of demand for clarification of and protection from these <a href=\"https:\/\/www.rtinsights.com\/responsible-ai-balancing-ais-potential-with-trusted-use\/\">AI systems that are more and more involved in automating processes<\/a> that used to require human involvement to make decisions. Organizations such as ISO, governments, and the non-profit, community-based Responsible AI Institute (RAII) are responding with certifications, frameworks for affirming compliance, and guidance on how to create and operate systems that rely on AI responsibly.<\/p>\n\n\n\n<p>So, what is responsible AI? A widely accepted definition of responsible AI is AI that does not harm human health, wealth, or livelihood.<\/p>\n\n\n\n<p>CloudDataInsights recently had the chance to interview Seth Dobrin, president of RAII, ahead of the <a href=\"https:\/\/newyork.theaisummit.com\/\">AI Summit New York<\/a> on the mission, the challenges, and the opportunities to bring together experts from academic research, government agencies, standards bodies, and enterprises in a drive toward creating Responsible AI. <\/p>\n\n\n\n<p>This year, co-located with AI Summit, the <a href=\"https:\/\/www.responsible.ai\/post\/and-the-raise-award-nominees-are\">RAISE <\/a>event (in-person and virtual on December 7th, 5:30 to 7:30 p.m. EST) will recognize organizations and individuals who are at the forefront of this effort.<\/p>\n\n\n\n<p><em>Note: The interview was edited and condensed for clarity.<\/em><\/p>\n\n\n\n<p><strong>Q 1: The mission of the Responsible AI Institute, as stated on its <\/strong><a href=\"http:\/\/responsible.ai\"><strong>website<\/strong><\/a><strong>, seems very clear and obvious. 
Are you finding that you have to go into detail or explain what responsible AI is and why we have a responsibility to uphold it?<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/Seth-Dobrin.jpg\" alt=\"\" class=\"wp-image-2072\" width=\"183\" height=\"183\" srcset=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/Seth-Dobrin.jpg 732w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/Seth-Dobrin-300x300.jpg 300w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/Seth-Dobrin-150x150.jpg 150w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/Seth-Dobrin-45x45.jpg 45w\" sizes=\"(max-width: 183px) 100vw, 183px\" \/><figcaption class=\"wp-element-caption\">Seth Dobrin, President of the Responsible AI Institute<\/figcaption><\/figure><\/div>\n\n\n<p>For the most part, no one wants to create<em> ir<\/em>responsible AI. Most people want to create responsible AI, so inherently, that’s not the problem. However, some organizations struggle to make a business case for it and it gets deprioritized among all of the other things, especially in difficult times, like where we’ve been for the last few years. <\/p>\n\n\n\n<p>Although difficult times tend to push organizations harder to do things like this. So we do have to spend a bit of time with some companies, some organizations, helping them understand why it’s important, why do it now, that there are business benefits from it, and then also, how the RAI Institute is different from other for-profit organizations in this space. We are the only independent, non-profit in this space that’s approved to certify against published standards.<\/p>\n\n\n\n<p><strong>Q2: The community-based model is central to your organization. You have the innovators, the policy makers, and the businesses that are trying to conform to responsible AI standards. Do you actively have to bridge the conversation between the researchers and the enterprises?<\/strong><\/p>\n\n\n\n<p>We’re a fully member-driven organization, and community is a big part of what we do. What the RAI Institute brings forward is not just the very deep expertise of the people we have employed, but it’s also the experience and opinion of the community.<\/p>\n\n\n\n<p>The community includes academics, governments, standard bodies, and individuals that are part of corporations. Since we align to international best practices and standards, we do spend a lot of time with policy makers and regulators to help them understand the best practices and how they can validate that companies and organizations that they are, in fact, aligned to those regulations. <\/p>\n\n\n\n<p><strong>Q3: Once a Responsible AI framework is adopted, an AI audit only takes weeks which is a long way from the early days when AI was unexplainable to a certain degree or simply a black box phenomenon. What is the technology lift or investment that’s required of companies that want to certify their responsible AI?<\/strong><\/p>\n\n\n\n<p>A responsible AI audit is similar to a <a href=\"https:\/\/us.aicpa.org\/interestareas\/frc\/assuranceadvisoryservices\/serviceorganization-smanagement\">Service Organization Control 2<\/a> audit of customer data management or any other kind of audit process. 
Since the ultimate goal is to protect human health, wealth, or livelihood, it is more than likely going to involve some kind of certification and a third-party audit, especially once the regulations have matured. In any case, the starting point is a self-audit, a self-assessment.

You can at least attest that this piece of AI-enabled software is, in fact, responsibly implemented up to the sixth dimension (security). Then an organization whose AI usage doesn't impact health, wealth, or livelihood might stop at the self-attestation and declare that it's not infringing on consumer protection, that people are accountable for it, that it's fair, and that it's explainable.

Other organizations might want someone else to come in and validate the self-assessment, even if there's no health, wealth, or livelihood impact. They have decided that it's still important to them that their customers know they care about responsible AI. The next level is a third-party audit of your systems, which would then qualify you for certification.

The space is still maturing. We're not aware of a single third-party audit that's been completed. The first one is being piloted by Canada's AI Management System Accreditation Program. The pilot is based on ISO's AI Management System standard (ISO 42001).

The RAI Institute doesn't build technology to implement AI or to perform the audits. We build the assessments, called conformity assessment schemes. These are aligned to global standards, and they ingest a lot of data from the technologies at play in the AI system. Fairness metrics, bias metrics, explainability metrics, and robustness, for example, are all part of what is ingested for conformity assessments.

**Q4: That you are ahead of the market is very important, but in this particular instance, do you feel the governments are running ahead of the users? For example, local and national governments are getting their policies and fine schedules in place without necessarily having the tools to enforce regulations.**

I don't think that's necessarily unusual. If you look back at GDPR, it was very similar. While policy makers seem to be ahead of organizations these days, policy makers are not ahead of what society is demanding.

And so, in this case, I would argue that it's the organizations that are behind what society is demanding, and regulators and governments are just helping to make sure that we, as individuals, as citizens of the world, have the protections that we need and that there's appropriate accountability. That's an important nuance: yes, they're ahead of organizations, but if organizations were doing this on their own, the regulations probably wouldn't be coming as fast as they are.

**Q5: What is the RAI Institute's number one goal in the coming year? And what challenges do you foresee?**

As we move forward, our goal is to solidify the Institute's position as the de facto standard when you're trying to assess, validate, or certify that your AI is, in fact, responsible. So essentially, we're enabling organizations to fast-track their responsible AI efforts. That's our focus.

I think that we're focused on two different types of organizations. We're looking at organizations that are building and buying AI and how we can help them. And we're looking at providers that are building and supplying AI-enabled software, helping them bring their tools to bear in a responsible manner.

For both, we're ensuring that the AI systems are appropriately mature, that they have the right policy and governance in place, and that they understand the regulatory landscape. If companies think in that mindset, it will help them really know whether they're applying their tools appropriately.

Our biggest challenge is confusion in the market: the belief that software providers can give organizations everything they need, when really, software tools are just one piece of the puzzle. On their own, they don't get organizations where they need to go.

Organizations also need standards, much like the ISO or NIST standards we already use, but ones being developed for AI, and they need a way to show that they're aligned with and complying with those standards.

The RAI Institute's conformity assessments demonstrate that an organization is, in fact, aligning to standards. The standard behind the Canadian AI management program is the only AI standard with an approved conformity assessment that exists today. Canada, ISO, IEC (ISO/IEC 42001), and others have agreed to harmonize their standards to make it easier to validate and certify responsible AI.

We're seeing these roll out globally over the next 12 to 18 months. Soon after that, we'll see the EU AI Act being completed. If we extrapolate from GDPR, the AI Act will become enforceable two years after it goes into effect. So I think we'll have two years to get certification bodies up and running so that organizations have some way to validate that they are aligning to regulations.

**Q6: Another point of confusion is around AI itself and what it means to enable an application with AI. Have you seen companies claim or deny that an application is using AI?**

When we talk about AI, we have to keep in mind that rules have been used in systems for a long time. In fact, if you look at New York City's [automated employment decision tools (AEDT) law](https://ogletree.com/insights/new-york-citys-automated-employment-decision-tools-law-proposed-rules-are-finally-here), it's not an AI tools law; it's an automated employment decision tools law, and it applies to rules-based automation as well as AI-enabled automation. From our perspective, the conformity assessment can be used for systems that use AI or rules, and that's how it should be, especially when there's an impact on health, wealth, or livelihood.

Any time any kind of automated system is being used for healthcare decisions, job descriptions, or finance decisions, you need some kind of assessment of whether it is being done responsibly, because rules can be just as unfair as AI. The math itself is not unfair or biased. It's the past decisions that humans have made that are, and those past decisions are still ingrained in rules. When I talk about AI, rules are part of it. By the way, some people refer to rules as "symbolic AI." It's the automation of decision-making, whether enabled by rules or AI, that is giving rise to demands for responsible AI.
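
To ground the metrics Dobrin mentions in Q3, and his point in Q6 that rules can be just as unfair as AI, here is a minimal, hypothetical sketch of one fairness signal a conformity assessment might ingest. Everything in it (the function names, the toy data, the five-years-of-experience rule) is an assumption for illustration, not an RAI Institute scheme or any published standard: demographic parity difference measures the gap in approval rates between groups, and it is computed the same way whether the decision logic is a hand-written rule or an ML model.

```python
# Hypothetical illustration, not an RAI Institute tool or conformity
# assessment scheme: the kind of fairness signal such an assessment
# might ingest, applied here to a purely rules-based decision system.
from typing import Callable

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of applicants the system approves."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_difference(
    applicants: list[dict],
    decide: Callable[[dict], bool],
    group_key: str = "group",
) -> float:
    """Largest gap in approval rates between any two groups (0.0 = parity)."""
    by_group: dict[str, list[bool]] = {}
    for applicant in applicants:
        by_group.setdefault(applicant[group_key], []).append(decide(applicant))
    rates = [selection_rate(decisions) for decisions in by_group.values()]
    return max(rates) - min(rates)

def experience_rule(applicant: dict) -> bool:
    # A plain business rule (no ML anywhere) that can still produce unequal
    # outcomes when its inputs correlate with group membership.
    return applicant["years_experience"] >= 5

applicants = [
    {"group": "A", "years_experience": 7},
    {"group": "A", "years_experience": 6},
    {"group": "B", "years_experience": 3},
    {"group": "B", "years_experience": 8},
]
print(demographic_parity_difference(applicants, experience_rule))  # 0.5
```

On the toy data, the plain experience rule approves all of group A but only half of group B, a 0.5 gap in approval rates: a small demonstration of how a rule with no ML in it can still carry the bias of the past decisions it encodes.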
<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Seth Dobrin, President of the Responsible AI Institute, explained the drive for organizations and software providers to confirm that they have complied with regulations or followed best practices in designing and operating AI systems.<\/p>\n","protected":false},"author":44,"featured_media":2074,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"om_disable_all_campaigns":false,"_uag_custom_page_level_css":"","neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","neve_meta_reading_time":"","_themeisle_gutenberg_block_has_review":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[14],"tags":[137],"class_list":["post-2071","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-responsible-ai"],"acf":[],"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816.jpg","uagb_featured_image_src":{"full":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816.jpg",300,147,false],"thumbnail":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816-150x147.jpg",150,147,true],"medium":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-300x147.jpg",300,147,true],"medium_large":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-768x377.jpg",768,377,true],"large":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816.jpg",300,147,false],"1536x1536":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816.jpg",300,147,false],"2048x2048":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816.jpg",300,147,false],"alm-thumbnail":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816-150x147.jpg",150,147,true],"neve-blog":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-930x488.jpg",930,488,true],"rpwe-thumbnail":["https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2022\/12\/ethical-ai-2-Depositphotos_160300574_S-e1669931278816-45x45.jpg",45,45,true]},"uagb_author_info":{"display_name":"Elisabeth Strenger","author_link":"https:\/\/www.clouddatainsights.com\/author\/estrenger\/"},"uagb_comment_info":0,"uagb_excerpt":"Seth Dobrin, President of the Responsible AI Institute, explained the drive for organizations and software providers to confirm that they have complied with regulations or followed best practices in designing and operating AI 
systems.","_links":{"self":[{"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/posts\/2071"}],"collection":[{"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/users\/44"}],"replies":[{"embeddable":true,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/comments?post=2071"}],"version-history":[{"count":8,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/posts\/2071\/revisions"}],"predecessor-version":[{"id":2090,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/posts\/2071\/revisions\/2090"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/media\/2074"}],"wp:attachment":[{"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/media?parent=2071"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/categories?post=2071"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.clouddatainsights.com\/wp-json\/wp\/v2\/tags?post=2071"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}