<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>responsible AI Archives - CDInsights</title>
	<atom:link href="https://www.clouddatainsights.com/tag/responsible-ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.clouddatainsights.com/tag/responsible-ai/</link>
	<description>Transform Your Business in a Cloud Data World</description>
	<lastBuildDate>Fri, 24 Mar 2023 20:27:50 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.1</generator>

<image>
	<url>https://www.clouddatainsights.com/wp-content/uploads/2022/05/CDI-Favicon-2-45x45.jpg</url>
	<title>responsible AI Archives - CDInsights</title>
	<link>https://www.clouddatainsights.com/tag/responsible-ai/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">207802051</site>	<item>
		<title>The 3 Steps to Make Responsible AI a Reality</title>
		<link>https://www.clouddatainsights.com/the-3-steps-to-make-responsible-ai-a-reality/</link>
					<comments>https://www.clouddatainsights.com/the-3-steps-to-make-responsible-ai-a-reality/#respond</comments>
		
		<dc:creator><![CDATA[Zachariah Eslami]]></dc:creator>
		<pubDate>Fri, 24 Mar 2023 01:23:19 +0000</pubDate>
				<category><![CDATA[AI/ML]]></category>
		<category><![CDATA[responsible AI]]></category>
		<guid isPermaLink="false">https://www.clouddatainsights.com/?p=2493</guid>

					<description><![CDATA[Establish a Responsible AI practice to maximize the potential of AI for your business and customers.]]></description>
										<content:encoded><![CDATA[<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img fetchpriority="high" decoding="async" src="https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-2-Depositphotos_181478442_S.jpg" alt="" class="wp-image-2496" width="742" height="482" srcset="https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-2-Depositphotos_181478442_S.jpg 989w, https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-2-Depositphotos_181478442_S-300x195.jpg 300w, https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-2-Depositphotos_181478442_S-768x499.jpg 768w" sizes="(max-width: 742px) 100vw, 742px" /><figcaption class="wp-element-caption"><em>Establish a Responsible AI practice to maximize the potential of AI for your business and customers.</em></figcaption></figure></div>


<p>In the wake of the recent attention on large language models such as ChatGPT, LLaMA, and others, the AI space race has been kicked into overdrive. These models demonstrate tangible, practical use cases, currently centered on search, that provide seemingly immediate value for users.</p>



<p><a href="https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview" target="_blank" rel="noreferrer noopener">Recent concerns</a> from OpenAI’s Chief Scientist about the availability of these models in an open-source ecosystem, combined with <a href="https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination/" target="_blank" rel="noreferrer noopener">ongoing hallucinations</a> from these newly available generative AI models, underscore the immaturity of the AI ecosystem and the caveat to seemingly realized value from AI model output. While ROI from AI is typically quantified via cost savings, revenue generation, insight discovery, etc., it&#8217;s important to keep in mind that artificially intelligent applications need to be leveraged with a system of Responsible AI checks and balances. Mitigating bias, establishing trust and transparency in data lineage, and creating processes to monitor the outcome of model inference ensure more scalable long-term benefits from AI.</p>



<p>We offer this prescriptive guide for companies that are starting, or have already started, their AI journey and want to establish Responsible AI principles and practices.</p>



<p><strong>See also: </strong><a href="https://www.clouddatainsights.com/6q4-seth-dobrin-from-the-responsible-ai-institute-on-attesting-that-your-app-does-no-harm/" target="_blank" rel="noreferrer noopener">6Q4: Responsible AI Institute’s Seth Dobrin on Attesting that Your App Does No Harm</a></p>



<h2 class="wp-block-heading">State of the Market</h2>



<p>Businesses see AI as a gateway to strategic advantage in the market and are looking for opportunities to integrate this technology into their existing workflows. <a href="https://www.forbes.com/sites/louiscolumbus/2017/09/10/how-artificial-intelligence-is-revolutionizing-business-in-2017/?sh=62d9e9445463" target="_blank" rel="noreferrer noopener">BCG and MIT conducted a survey of global business</a> leaders and found that 83% of executives view AI as an immediate strategic priority, and 75% of respondents believe AI will help them move into new businesses and ventures. AI has become a ubiquitous part of the modern analytics stack and a crucial CapEx investment to help businesses foster innovation at faster speeds.</p>



<p>However, despite a <a href="https://spectrum.ieee.org/artificial-intelligence-index#toggle-gdpr">163% increase</a> in global AI investment from 2019 to 2021, projected to be <a href="https://www.idc.com/getdoc.jsp?containerId=prUS48881422">$500B in 2023</a>, over <a href="https://www.infoworld.com/article/3639028/why-ai-investments-fail-to-deliver.html">85% of AI projects</a> are not delivering value to businesses or consumers, namely because of challenges related to:</p>



<ol class="wp-block-list" type="1" start="1">
<li><strong>Purchase Expectations</strong> (and time to value): Misleading positioning in terms of the function and implementation of AI to solve a core business problem can lead to unmet expectations for ROI</li>



<li><strong>Improper Build Practices</strong>: The development process for AI often focuses too heavily on the model lifecycle rather than on being human- and data-centric; teams are misaligned, work from different interpretations of the data, and are not set up for long-term success</li>



<li><strong>Consumption Deficiencies</strong>: Model deployment often takes too long because of 1 and 2 and, as a result, businesses tend to rush and disregard the mechanisms by which model output can be made understandable, trustworthy, secure, and unbiased</li>
</ol>



<p>As a result, many AI projects &#8211; <a href="https://www.gartner.com/en/newsroom/press-releases/2020-10-19-gartner-identifies-the-top-strategic-technology-trends-for-2021">over 50%</a> &#8211; fail to make it into a production environment, and those that do are often not scalable and do not responsibly consider downstream impact.</p>



<p><strong>See also: </strong><a href="https://www.rtinsights.com/nist-ai-bias-is-goes-way-beyond-data/" target="_blank" rel="noreferrer noopener">NIST: AI Bias Goes Way Beyond Data</a></p>



<h2 class="wp-block-heading">A Responsible AI Framework</h2>



<p>We put together this simple framework to help you institute Responsible AI practices in your organization to make sure your AI projects are successful for you and your customers:</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="295" src="https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-1024x295.jpg" alt="" class="wp-image-2494" srcset="https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-1024x295.jpg 1024w, https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-300x86.jpg 300w, https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai-768x221.jpg 768w, https://www.clouddatainsights.com/wp-content/uploads/2023/03/responsible-ai.jpg 1133w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p></p>



<p><strong>Step 1</strong>: <strong>Problems AI can Solve</strong></p>



<p><strong>Education</strong> &#8211; Understand what your operational and analytical data tell you and the capacity of AI to address the problem</p>



<p><strong>People/Process</strong> &#8211; Ensure team alignment to an objective, and take a human-centered design approach to consider multiple roles, backgrounds, and perspectives</p>



<p><strong>Technology</strong> &#8211; Instill effective mechanisms to measure consistently and accurately</p>



<p><strong>Step 2</strong>: <strong>Principles for Consistency</strong></p>



<p><strong>Education</strong> &#8211; Give consumers insight into how, where, and for what purposes their data is being used when AI is in the picture</p>



<p><strong>People/Process</strong> &#8211; Create boards, committees, and centers of excellence to help drive similar approaches across an organization</p>



<p><strong>Technology</strong> &#8211; Look for agnostic tools that can work with existing cloud and data warehouse platforms to optimize the speed of implementation, developer productivity, and compute resource spend</p>



<p><strong>Step 3: Measures of Success</strong></p>



<p><strong>Education</strong> &#8211; Learn about the effectiveness of AI in the application by asking end consumers</p>



<p><strong>People/Process</strong> &#8211; Create regular cadences to ensure transparency, inclusivity, and reliability of the model in addressing the core business challenge</p>



<p><strong>Technology</strong> &#8211; Leverage trusted AI platforms, MLOps, and existing BI tools to mitigate risk, monitor model performance, and create simple views for measuring success</p>
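

<p>As one illustration of the kind of monitoring an MLOps pipeline can provide (a minimal sketch of our own, with made-up thresholds rather than any particular platform&#8217;s API), the check below compares a model&#8217;s rolling accuracy against its baseline and raises an alert when performance degrades:</p>



<pre class="wp-block-code"><code>def check_performance_drift(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """Flag drift when rolling accuracy falls more than `tolerance` below baseline."""
    correct = sum(1 for predicted, actual in recent_outcomes if predicted == actual)
    recent_accuracy = correct / len(recent_outcomes)
    return recent_accuracy &lt; baseline_accuracy - tolerance, recent_accuracy

# Usage: feed (prediction, ground_truth) pairs sampled from production traffic.
outcomes = [("approve", "approve"), ("deny", "approve"), ("approve", "approve")]
drifted, accuracy = check_performance_drift(0.90, outcomes)
if drifted:
    print(f"ALERT: rolling accuracy {accuracy:.2f} fell below tolerance; review the model")</code></pre>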



<p>It&#8217;s never too early to start &#8211; creating not just an AI practice but a Responsible AI practice leads to more successful AI projects that drive business value. Companies that are implementing Responsible AI agree on the benefits:</p>



<ul class="nv-cv-m wp-block-list">
<li>50% of leaders agree their products and services have improved</li>



<li>48% of leaders have created brand differentiation</li>



<li>43% of leaders have seen an increase in innovation</li>
</ul>



<p class="has-text-align-center"><em>Forty-one percent of RAI leaders confirmed they have realized some measurable business benefit compared to only 14% of companies less invested in RAI.</em>   &#8211; <a href="https://sloanreview.mit.edu/projects/to-be-a-responsible-ai-leader-focus-on-being-responsible/">MIT Study on Responsible AI</a></p>



<p>While still in its infancy, Responsible AI has proven to be successful for businesses investing in AI, leading to innovation and alignment to other strategic objectives:</p>



<ul class="nv-cv-m wp-block-list">
<li>eBay has used Responsible AI principles to educate its employees about the importance of corporate social responsibility</li>



<li>Levi Strauss has used Responsible AI principles to create a consistent process for the implementation of AI across multiple business units.</li>



<li>H&amp;M has used Responsible AI principles to help measure and drive ambitions for environmental sustainability&nbsp;</li>
</ul>



<p>Responsible AI practices make for a successful AI practice. A successful AI practice enables AI projects to scale. More responsible AI means more business value.</p>
<div class="saboxplugin-wrap" itemtype="http://schema.org/Person" itemscope itemprop="author"><div class="saboxplugin-tab"><div class="saboxplugin-gravatar"><img decoding="async" src="https://www.clouddatainsights.com/wp-content/uploads/2023/03/zach-eslami.jpg" width="100"  height="100" alt="" itemprop="image"></div><div class="saboxplugin-authorname"><a href="https://www.clouddatainsights.com/author/zachariah-eslami/" class="vcard author" rel="author"><span class="fn">Zachariah Eslami</span></a></div><div class="saboxplugin-desc"><div itemprop="description"><p>Zachariah Eslami is currently Director of Product Management &#8211; AI/ML at AtScale. He is passionate about the application of data science and AI to help solve key challenges with data, and works with customers to help them develop and realize value from AI models. Prior to his time at AtScale, Zach worked as a Speech ASR/NLP Product Manager at Rev.ai, managed a team of solution engineers at IBM, and helped to productize the IBM Watson AI core application platform. Zach currently lives in Austin, Texas, in his free time, he enjoys playing tennis, photography, and playing piano.</p>
</div></div><div class="clearfix"></div></div></div>]]></content:encoded>
					
					<wfw:commentRss>https://www.clouddatainsights.com/the-3-steps-to-make-responsible-ai-a-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2493</post-id>	</item>
		<item>
		<title>6Q4: Responsible AI Institute&#8217;s Seth Dobrin on Attesting that Your App Does No Harm</title>
		<link>https://www.clouddatainsights.com/6q4-seth-dobrin-from-the-responsible-ai-institute-on-attesting-that-your-app-does-no-harm/</link>
					<comments>https://www.clouddatainsights.com/6q4-seth-dobrin-from-the-responsible-ai-institute-on-attesting-that-your-app-does-no-harm/#respond</comments>
		
		<dc:creator><![CDATA[Elisabeth Strenger]]></dc:creator>
		<pubDate>Thu, 01 Dec 2022 21:50:23 +0000</pubDate>
				<category><![CDATA[AI/ML]]></category>
		<category><![CDATA[responsible AI]]></category>
		<guid isPermaLink="false">https://www.clouddatainsights.com/?p=2071</guid>

					<description><![CDATA[Seth Dobrin, President of the Responsible AI Institute, explained the drive for organizations and software providers to confirm that they have complied with regulations or followed best practices in designing and operating AI systems.]]></description>
										<content:encoded><![CDATA[<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" src="https://www.clouddatainsights.com/wp-content/uploads/2022/12/ethical-ai-Depositphotos_160300574_S.jpg" alt="" class="wp-image-2075" width="750" height="375" srcset="https://www.clouddatainsights.com/wp-content/uploads/2022/12/ethical-ai-Depositphotos_160300574_S.jpg 1000w, https://www.clouddatainsights.com/wp-content/uploads/2022/12/ethical-ai-Depositphotos_160300574_S-300x150.jpg 300w, https://www.clouddatainsights.com/wp-content/uploads/2022/12/ethical-ai-Depositphotos_160300574_S-768x384.jpg 768w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption"><em>Seth Dobrin, President of the Responsible AI Institute, explained the drive for organizations and software providers to confirm that they have complied with regulations or followed best practices in designing and operating AI systems.</em></figcaption></figure></div>


<p>What is responsible AI? Is it something that we’ll know when we see it? Or conversely, when we discover that AI causes harm rather than the good that was intended?&nbsp;</p>



<p>There is a groundswell of demand for clarification of and protection from these <a href="https://www.rtinsights.com/responsible-ai-balancing-ais-potential-with-trusted-use/">AI systems that are more and more involved in automating processes</a> that used to require human involvement to make decisions. Organizations such as ISO, governments, and the non-profit, community-based Responsible AI Institute (RAII) are responding with certifications, frameworks for affirming compliance, and guidance on how to create and operate systems that rely on AI responsibly.</p>



<p>So, what is responsible AI? A widely accepted definition of responsible AI is AI that does not harm human health, wealth, or livelihood.</p>



<p>CloudDataInsights recently had the chance to interview Seth Dobrin, president of RAII, ahead of the <a href="https://newyork.theaisummit.com/">AI Summit New York</a> on the mission, the challenges, and the opportunities to bring together experts from academic research, government agencies, standards bodies, and enterprises in a drive toward creating Responsible AI.&nbsp;</p>



<p>This year, co-located with AI Summit, the <a href="https://www.responsible.ai/post/and-the-raise-award-nominees-are">RAISE </a>event (in-person and virtual on December 7th, 5:30 to 7:30 p.m. EST) will recognize organizations and individuals who are at the forefront of this effort.</p>



<p><em>Note: The interview was edited and condensed for clarity.</em></p>



<p><strong>Q 1: The mission of the Responsible AI Institute, as stated on its </strong><a href="http://responsible.ai"><strong>website</strong></a><strong>, seems very clear and obvious. Are you finding that you have to go into detail or explain what responsible AI is and why we have a responsibility to uphold it?</strong></p>


<div class="wp-block-image">
<figure class="alignright size-full is-resized"><img loading="lazy" decoding="async" src="https://www.clouddatainsights.com/wp-content/uploads/2022/12/Seth-Dobrin.jpg" alt="" class="wp-image-2072" width="183" height="183" srcset="https://www.clouddatainsights.com/wp-content/uploads/2022/12/Seth-Dobrin.jpg 732w, https://www.clouddatainsights.com/wp-content/uploads/2022/12/Seth-Dobrin-300x300.jpg 300w, https://www.clouddatainsights.com/wp-content/uploads/2022/12/Seth-Dobrin-150x150.jpg 150w, https://www.clouddatainsights.com/wp-content/uploads/2022/12/Seth-Dobrin-45x45.jpg 45w" sizes="(max-width: 183px) 100vw, 183px" /><figcaption class="wp-element-caption">Seth Dobrin, President of the Responsible AI Institute</figcaption></figure></div>


<p>For the most part, no one wants to create<em> ir</em>responsible AI. Most people want to create responsible AI, so inherently, that&#8217;s not the problem. However, some organizations struggle to make a business case for it and it gets deprioritized among all of the other things, especially in difficult times, like where we&#8217;ve been for the last few years.&nbsp;</p>



<p>That said, difficult times tend to push organizations harder to do things like this. So we do have to spend a bit of time with some companies and organizations, helping them understand why it&#8217;s important, why to do it now, that there are business benefits from it, and also how the RAI Institute is different from the for-profit organizations in this space. We are the only independent non-profit in this space that&#8217;s approved to certify against published standards.</p>



<p><strong>Q2: The community-based model is central to your organization. You have the innovators, the policy makers, and the businesses that are trying to conform to responsible AI standards. Do you actively have to bridge the conversation between the researchers and the enterprises?</strong></p>



<p>We&#8217;re a fully member-driven organization, and community is a big part of what we do. What the RAI Institute brings forward is not just the very deep expertise of the people we have employed, but it&#8217;s also the experience and opinion of the community.</p>



<p>The community includes academics, governments, standards bodies, and individuals who are part of corporations. Since we align to international best practices and standards, we spend a lot of time with policy makers and regulators, helping them understand the best practices and how they can validate that companies and organizations are, in fact, aligned to those regulations.</p>



<p><strong>Q3: Once a Responsible AI framework is adopted, an AI audit takes only weeks, which is a long way from the early days when AI was unexplainable to a certain degree or simply a black-box phenomenon. What is the technology lift or investment required of companies that want to certify their responsible AI?</strong></p>



<p>A responsible AI audit is similar to a <a href="https://us.aicpa.org/interestareas/frc/assuranceadvisoryservices/serviceorganization-smanagement">Service Organization Control 2</a> audit of customer data management or any other kind of audit process. Since the ultimate goal is to protect human health, wealth, or livelihood, it is more than likely going to involve some kind of certification and a third-party audit, especially once the regulations have matured. In any case, the starting point is a self-audit, a self-assessment.</p>



<p>You can at least attest that this piece of AI-enabled software is, in fact, responsibly implemented up to the sixth dimension (security). Then an organization whose AI usage doesn&#8217;t impact health, wealth, or livelihood might stop at the self-attestation and can declare that it&#8217;s not infringing on consumer protection, that people are accountable for it, that it&#8217;s fair, and that it&#8217;s explainable.</p>



<p>Other organizations might want someone else to come in and validate the self-assessment, even if there’s no health, wealth, or livelihood impact. They have decided that it&#8217;s still important to them that their customers know that they care about responsible AI. The next level is having a third-party audit of your systems, which would then qualify for a certification.</p>



<p>The space is still maturing. We’re not aware of a single third-party audit that’s been completed. The first one is being piloted by Canada’s AI Management System Accreditation Program. The pilot is based on ISO&#8217;s AI Management System standard (ISO 42001).&nbsp;</p>



<p>The RAI Institute doesn’t build technology to implement AI or to perform the audits. We build the assessments, called conformity assessment schemes. These are aligned to global standards, and they ingest a lot of data from the technologies at play in the AI system. Fairness metrics, bias metrics, explainability metrics, and robustness metrics, for example, are all part of what is ingested for conformity assessments.</p>
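

<p><em>Editor&#8217;s note: to give a sense of what such ingested metrics might look like, here is a hypothetical illustration in Python; the field names are our own invention, not the RAI Institute&#8217;s actual schema.</em></p>



<pre class="wp-block-code"><code>import json

# Hypothetical metrics bundle for a conformity assessment. Field names are
# illustrative only; they are not the RAI Institute's actual schema.
assessment_payload = {
    "system": "loan-approval-model",
    "model_version": "v3.2.0",
    "fairness": {"demographic_parity_ratio": 0.87},   # approval-rate ratio across groups
    "bias": {"false_positive_rate_gap": 0.04},        # FPR difference between groups
    "explainability": {"method": "SHAP", "global_importances_reported": True},
    "robustness": {"accuracy_under_perturbation": 0.81},
}

print(json.dumps(assessment_payload, indent=2))</code></pre>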



<p><strong>Q4: That you are ahead of the market is very important, but in this particular instance, do you feel the governments are running ahead of the users? For example, local and national governments are getting their policies and their fines structure in place without necessarily having the tools to enforce regulations.</strong></p>



<p>I don&#8217;t think that&#8217;s necessarily unusual. If you look back at GDPR, it was very similar. While policy makers seem to be ahead of organizations these days, policy makers are not ahead of what society is demanding.&nbsp;</p>



<p>And so, in this case, I would argue that it&#8217;s the organizations that are behind what society is demanding, and regulators and governments are just helping to make sure that we, as individuals, as citizens of the world, have the protections that we need, that there&#8217;s appropriate accountability. That&#8217;s an important nuance–yes, they&#8217;re ahead of organizations, but what if organizations were doing this on their own? The regulations probably wouldn&#8217;t be coming as fast as they are.</p>



<p><strong>Q5: What is the RAI Institute’s number one goal in the coming year? And what challenges do you foresee?</strong></p>



<p>As we move forward, our goal is for the RAI Institute to solidify itself as the de facto standard when you&#8217;re trying to assess, validate, or certify that your AI is, in fact, responsible. So essentially, enabling organizations to fast-track their responsible AI efforts. That’s our focus.</p>



<p>I think that we&#8217;re focused on two different types of organizations. We&#8217;re looking at organizations that are building and buying AI and how we can help them. And we&#8217;re looking at providers that are building and supplying AI-enabled software and helping them be able to bring their tools to bear in a responsible manner.&nbsp;</p>



<p>For both, we’re ensuring that the AI systems are appropriately mature and they have the right policy and governance in place and understand the regulatory landscape. If companies think in that mindset, it&#8217;ll help them really know how they&#8217;re applying their tools appropriately.</p>



<p>Our biggest challenge is confusion in the market, which thinks that software providers can get organizations what they need, when really, software tools are just one piece of the puzzle. They don’t get them where they need to go.&nbsp;</p>



<p>They also need standards, like the ISO or NIST standards we use today but developed specifically for AI, and they need a way to show that they’re aligned and complying with those standards.</p>



<p>The RAI Institute&#8217;s conformity assessments demonstrate that an organization is, in fact, aligning to standards. The standard behind the Canadian AI management program is the only AI standard with an approved conformity assessment that exists today. Canada, ISO, IEC (ISO/IEC 42001), and others have agreed to harmonize their standards to make it easier to validate and certify responsible AI.</p>



<p>We&#8217;re seeing these rolling out globally over the next 12 to 18 months. Soon after that, we&#8217;ll see the EU AI Act being completed. If we extrapolate from GDPR, two years after the AI Act goes into effect is when it&#8217;ll be enforceable. So I think we&#8217;ll have two years to get certification bodies up and running so that organizations have some way to validate that they are aligning to regulations.</p>



<p><strong>Q6: Another point of confusion is around AI itself and what it means to enable an application with AI. Have you seen companies claim or deny that an application is using AI?</strong></p>



<p>When we talk about AI, we have to keep in mind that rules have been used in systems for a long time. In fact, if you look at New York City’s <a href="https://ogletree.com/insights/new-york-citys-automated-employment-decision-tools-law-proposed-rules-are-finally-here">automated employment decision tools (AEDT) law</a>, it&#8217;s not an AI tools act, it&#8217;s an automated employment act, and it applies to rules-based automation as well as AI-enabled automation. From our perspective, the conformity assessment can be used for systems that use AI or rules, and that’s how it should be, especially when there’s an impact on health, wealth, or livelihood.</p>



<p>Any time any kind of automated system is being used for healthcare decisions, job descriptions, or finance decisions, you need some kind of assessment of whether it is being done responsibly because rules can be just as unfair as AI. The math itself is not unfair, not biased. It&#8217;s the past decisions that humans have made that are, and those past decisions are still ingrained in rules. When I talk about AI, rules are part of it. By the way, some people refer to rules as “symbolic AI.” It’s the automation of decision-making, whether enabled by rules or AI, that is what is giving rise to demands for responsible AI.&nbsp;</p>
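

<p><em>Editor&#8217;s note: a small, invented example makes the point. A plain business rule that keys on ZIP code can fail the same four-fifths disparate-impact check commonly applied to AI models, because the historical decisions it encodes were themselves skewed:</em></p>



<pre class="wp-block-code"><code>EXCLUDED_ZIPS = {"60612", "48205"}  # historically redlined areas baked into the rule

def approve(applicant):
    """A rules-based policy: no AI, just an inherited business rule."""
    return applicant["credit_score"] &gt;= 650 and applicant["zip_code"] not in EXCLUDED_ZIPS

def disparate_impact_ratio(applicants):
    """Approval-rate ratio between groups; below 0.8 fails the four-fifths rule."""
    rates = {}
    for group in ("A", "B"):
        members = [a for a in applicants if a["group"] == group]
        rates[group] = sum(approve(a) for a in members) / len(members)
    return min(rates.values()) / max(rates.values())

applicants = [
    {"group": "A", "credit_score": 700, "zip_code": "02139"},
    {"group": "A", "credit_score": 680, "zip_code": "10001"},
    {"group": "B", "credit_score": 700, "zip_code": "60612"},
    {"group": "B", "credit_score": 690, "zip_code": "02139"},
]
print(disparate_impact_ratio(applicants))  # 0.5: the rule, not a model, fails the check</code></pre>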
<div class="saboxplugin-wrap" itemtype="http://schema.org/Person" itemscope itemprop="author"><div class="saboxplugin-tab"><div class="saboxplugin-gravatar"><img alt='Elisabeth Strenger' src='https://secure.gravatar.com/avatar/d42bdc4339b8a684f54ad42d3ac0accb?s=100&#038;d=mm&#038;r=g' srcset='https://secure.gravatar.com/avatar/d42bdc4339b8a684f54ad42d3ac0accb?s=200&#038;d=mm&#038;r=g 2x' class='avatar avatar-100 photo' height='100' width='100' itemprop="image"/></div><div class="saboxplugin-authorname"><a href="https://www.clouddatainsights.com/author/estrenger/" class="vcard author" rel="author"><span class="fn">Elisabeth Strenger</span></a></div><div class="saboxplugin-desc"><div itemprop="description"><p>Elisabeth Strenger is a Senior Technology Writer at <a href="https://www.clouddatainsights.com/">CDInsights.ai</a>.</p>
</div></div><div class="clearfix"></div></div></div>]]></content:encoded>
					
					<wfw:commentRss>https://www.clouddatainsights.com/6q4-seth-dobrin-from-the-responsible-ai-institute-on-attesting-that-your-app-does-no-harm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2071</post-id>	</item>
	</channel>
</rss>