Saturday, June 15, 2024

Exploring Responsible AI with Ravit Dotan

In our latest episode of Leading with Data, we had the privilege of speaking with Ravit Dotan, a renowned expert in AI ethics. Ravit Dotan's diverse background, including a PhD in philosophy from UC Berkeley and her leadership in AI ethics, uniquely positions her to offer profound insights into responsible AI practices. Throughout our conversation, Ravit emphasized the importance of integrating responsible AI considerations from the inception of product development. She shared practical strategies for startups, discussed the importance of continuous ethics reviews, and highlighted the critical role of public engagement in refining AI approaches. Her insights provide a roadmap for companies aiming to navigate the complex landscape of AI accountability.

You can listen to this episode of Leading with Data on popular platforms like Spotify, Google Podcasts, and Apple Podcasts. Pick your favorite to enjoy the insightful content!

Key Insights from Our Conversation with Ravit Dotan

  • Responsible AI should be considered from the start of product development, not postponed until later stages.
  • Engaging in group exercises to discuss AI risks can raise awareness and lead to more responsible AI practices.
  • Ethics reviews should be conducted at every stage of feature development to assess risks and benefits.
  • Testing for bias is crucial, even when a feature like gender is not explicitly included in the AI model.
  • The choice of AI platform can significantly affect the level of discrimination in the system, so it is important to test and weigh responsibility factors when selecting a foundation for your technology.
  • Adapting to changes in business models or use cases may require changing the metrics used to measure bias, and companies should be prepared to embrace these changes.
  • Public engagement and expert consultation can help companies refine their approach to responsible AI and address broader issues.

Join our upcoming Leading with Data sessions for insightful discussions with AI and Data Science leaders!

Let's dive into the details of our conversation with Ravit Dotan!

What is the most dystopian scenario you can imagine with AI?

As the CEO of TechBetter, I have thought deeply about the potential dystopian outcomes of AI. The most troubling scenario for me is the proliferation of disinformation. Imagine a world where we can no longer rely on anything we find online, where even scientific papers are riddled with misinformation generated by AI. This would erode our trust in science and reliable sources of information, leaving us in a state of perpetual uncertainty and skepticism.

How did you transition into the field of responsible AI?

My journey into responsible AI began during my PhD in philosophy at UC Berkeley, where I specialized in epistemology and philosophy of science. I was intrigued by the inherent values shaping science and noticed parallels in machine learning, which was often touted as value-free and objective. With my background in tech and a desire for positive social impact, I decided to apply the lessons from philosophy to the burgeoning field of AI, aiming to detect and productively use the embedded social and political values.

What does responsible AI mean to you?

Responsible AI, to me, is not about the AI itself but about the people behind it – those who create, use, buy, invest in, and insure it. It's about creating and deploying AI with a keen awareness of its social implications, minimizing risks, and maximizing benefits. In a tech company, responsible AI is the outcome of responsible development processes that consider the broader social context.

When should startups begin to consider responsible AI?

Startups should think about responsible AI from the very beginning. Delaying this consideration only complicates things later on. Addressing responsible AI early lets you integrate these considerations into your business model, which can be crucial for gaining internal buy-in and ensuring engineers have the resources to handle responsibility-related tasks.

How can startups approach responsible AI?

Startups can begin by identifying common risks using frameworks like the AI RMF from NIST. They should consider how their target audience and company could be harmed by these risks and prioritize accordingly. Engaging in group exercises to discuss these risks can raise awareness and lead to a more responsible approach. It's also essential to tie in business impact to ensure ongoing commitment to responsible AI practices.
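The "identify, then prioritize" step above can be sketched as a simple likelihood-times-impact ranking. This is a minimal illustration, not part of the NIST AI RMF itself; the risk names and scores are hypothetical:

```python
# Illustrative risk register: each entry pairs a hypothetical risk
# with rough likelihood and impact scores on a 1-5 scale.
risks = [
    {"name": "biased outputs harm users", "likelihood": 4, "impact": 5},
    {"name": "model leaks training data", "likelihood": 2, "impact": 4},
    {"name": "misinformation at scale", "likelihood": 3, "impact": 5},
]

def prioritize(risks):
    """Rank risks by likelihood x impact, highest score first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

for r in prioritize(risks):
    print(f'{r["name"]}: {r["likelihood"] * r["impact"]}')
```

In practice the scores would come from the group discussion Ravit describes, and the ranking would feed directly into which mitigations get engineering resources first.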

What are the trade-offs between focusing on product development and responsible AI?

I don't see it as a trade-off. Addressing responsible AI can actually propel a company forward by allaying consumer and investor concerns. Having a plan for responsible AI can aid in market fit and demonstrate to stakeholders that the company is proactive in mitigating risks.

How do different companies approach the release of potentially risky AI features?

Companies vary in their approach. Some, like OpenAI, release products and iterate quickly upon identifying shortcomings. Others, like Google, may hold back releases until they are more certain about the model's behavior. The best practice is to conduct an ethics review at every stage of feature development to weigh the risks and benefits and decide whether to proceed.

Can you share an example where considering responsible AI changed a product or feature?

A notable example is Amazon's scrapped AI recruitment tool. After discovering that the system was biased against women, despite not having gender as a feature, Amazon chose to abandon the project. This decision likely saved them from potential lawsuits and reputational damage. It underscores the importance of testing for bias and considering the broader implications of AI systems.
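A bias like this can be surfaced by auditing outcomes against a protected attribute the model never sees: the attribute is held out-of-band and used only to compare selection rates. The sketch below is a hypothetical illustration (the data is invented) of a disparate-impact style check:

```python
# Hypothetical screening outcomes: (gender, selected) pairs.
# The model was never given gender; we use it only for the audit.
outcomes = [
    ("f", 1), ("f", 0), ("f", 0), ("f", 0),
    ("m", 1), ("m", 1), ("m", 1), ("m", 0),
]

def selection_rate(outcomes, group):
    """Fraction of candidates in `group` that the system selected."""
    picks = [sel for g, sel in outcomes if g == group]
    return sum(picks) / len(picks)

def disparate_impact_ratio(outcomes, group_a, group_b):
    """Ratio of selection rates; values well below 1.0 flag possible bias."""
    return selection_rate(outcomes, group_a) / selection_rate(outcomes, group_b)

ratio = disparate_impact_ratio(outcomes, "f", "m")
print(f"selection-rate ratio (f/m): {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
```

A common rule of thumb (the "four-fifths rule" from US employment guidance) treats ratios below 0.8 as a red flag worth investigating, since proxy features can encode the attribute even when it is excluded.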

How should companies handle the evolving nature of AI and the metrics used to measure bias?

Companies need to be adaptable. If a primary metric for measuring bias becomes outdated due to changes in the business model or use case, they need to switch to a more relevant metric. It's an ongoing journey of improvement, where companies should start with one representative metric, measure and improve against it, and then iterate to address broader issues.
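Such a metric switch might look like moving from demographic parity (equal selection rates) to equal opportunity (equal selection rates among qualified candidates) once ground-truth labels become available. The records below are hypothetical, chosen so that the first metric looks fine while the second flags a gap:

```python
# Hypothetical audit records: (group, predicted, actual) per candidate.
records = [
    ("a", 1, 1), ("a", 0, 1), ("a", 1, 0), ("a", 0, 0),
    ("b", 1, 1), ("b", 1, 1), ("b", 0, 0), ("b", 0, 0),
]

def demographic_parity_gap(records, g1, g2):
    """Absolute difference in overall selection rates between two groups."""
    def rate(g):
        preds = [p for grp, p, _ in records if grp == g]
        return sum(preds) / len(preds)
    return abs(rate(g1) - rate(g2))

def equal_opportunity_gap(records, g1, g2):
    """Absolute difference in true-positive rates (selection among the qualified)."""
    def tpr(g):
        preds = [p for grp, p, a in records if grp == g and a == 1]
        return sum(preds) / len(preds)
    return abs(tpr(g1) - tpr(g2))

print(demographic_parity_gap(records, "a", "b"))  # both groups at 0.5 -> 0.0
print(equal_opportunity_gap(records, "a", "b"))   # 0.5 vs 1.0 -> 0.5
```

The point of the sketch is the iteration Ravit describes: a metric that once looked representative can stop telling the whole story, and the audit code should be expected to change with the use case.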

While I don't categorize tools strictly as open source or proprietary in terms of responsible AI, it is crucial for companies to consider the AI platform they choose. Different platforms may carry different levels of inherent discrimination, so it's essential to test and keep responsibility factors in mind when selecting the foundation for your technology.

What advice do you have for companies facing the need to change their bias measurement metrics?

Embrace the change. Just as in other fields, sometimes a shift in metrics is unavoidable. It's important to start somewhere, even if it's not perfect, and to view it as an incremental improvement process. Engaging with the public and experts through hackathons or red-teaming events can provide valuable insights and help refine the approach to responsible AI.


Our enlightening discussion with Ravit Dotan underscored the critical need for responsible AI practices in today's rapidly evolving technological landscape. By incorporating ethical considerations from the start, engaging in group exercises to understand AI risks, and adapting to changing metrics, companies can better manage the social implications of their technologies.

Ravit's perspectives, drawn from her extensive experience and philosophical expertise, stress the importance of continuous ethics reviews and public engagement. As AI continues to shape our future, insights from leaders like Ravit Dotan are invaluable in guiding companies to develop technologies that are not only innovative but also socially responsible and ethically sound.

For more engaging sessions on AI, data science, and GenAI, stay tuned with us on Leading with Data.

Check out our upcoming sessions here.


