Guest blog: Light-it and Puppeteer's adoption of ethical AI practices
Author: Adam Mallát
Copywriter: Maca Balparda
Light-it is a digital product company focused on healthcare product development and innovation consulting, and Puppeteer is a platform that enables any healthcare company to build AI agents with unparalleled human-like capabilities.
As Innovation Manager at Light-it, I worked together with Javier Lempert, founder and CTO of Light-it and Puppeteer, to evaluate both companies' adherence to ethical AI practices using the Responsible AI Maturity Model. This blog post shares the lessons learned from this enriching collaboration!
What drives us to adopt ethical AI practices
AI responsibility revolves around the ethical considerations and obligations associated with the development, deployment, and use of artificial intelligence (AI) and generative AI systems. As AI technology continues to advance, it brings with it a host of opportunities and challenges, and one of the primary challenges is ensuring that AI is developed and used in a responsible manner.
At its core, AI responsibility acknowledges that AI systems have the potential to impact individuals, society, and the environment in significant ways. These impacts range from job displacement and data privacy concerns to biases and discrimination embedded within algorithms. Addressing these ethical issues is therefore imperative to mitigate potential harm and promote the beneficial use of AI technology.
AI responsibility encompasses several key aspects:
Transparency: There is a need for transparency in AI systems, including the disclosure of how algorithms are developed, trained, and used. Transparency enables users and stakeholders to understand the decision-making processes of AI systems and identify potential biases or risks.
Accountability: Developers and organizations must be held accountable for the outcomes of AI systems. This involves establishing clear lines of responsibility and mechanisms for addressing any harm caused by AI technologies. Accountability encourages ethical behavior and ensures that developers consider the potential consequences of their actions.
Fairness and Bias Mitigation: AI systems can inadvertently perpetuate or amplify biases present in the data used to train them. Addressing fairness and bias requires careful consideration of the data sources, algorithmic design, and evaluation methods to ensure that AI systems treat all individuals fairly and equitably.
Privacy and Data Protection: AI systems often rely on vast amounts of data, raising concerns about privacy and data protection. Respecting individuals' privacy rights and implementing robust data protection measures are essential to AI responsibility.
Safety and Security: AI systems can pose risks to safety and security if they malfunction or are exploited by malicious actors. Ensuring the safety and security of AI technologies involves rigorous testing, monitoring, and safeguarding against potential vulnerabilities.
Human Oversight and Control: While AI systems can automate various tasks, human oversight and control are crucial to ensuring that AI remains aligned with human values and objectives. Humans should retain the ability to intervene, override decisions, and hold ultimate responsibility for the actions of AI systems (a minimal sketch of this pattern follows this list).
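As an illustration of what human oversight can look like in code, the sketch below routes low-confidence AI recommendations to a human reviewer instead of acting on them automatically. The names (`review_gate`, `request_human_review`) and the confidence threshold are illustrative assumptions, not any specific product's implementation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def request_human_review(decision: Decision) -> str:
    # Placeholder: a real system would enqueue the case in a
    # clinician-facing review tool and wait for a human verdict.
    print(f"Escalating for human review: {decision.recommendation!r}")
    return "pending human review"

def review_gate(decision: Decision, threshold: float = 0.9) -> str:
    """Act automatically only on high-confidence outputs; everything
    else is intercepted so a human can approve or override it."""
    if decision.confidence < threshold:
        return request_human_review(decision)
    return decision.recommendation
```

The key design choice here is that human review is the default path for anything uncertain, so final responsibility always rests with a person rather than the system.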
Moreover, from a business point of view, AI responsibility gives a company an opportunity to stay ahead of competitors, including much larger ones, through its expertise in addressing safety and compliance issues. When developing AI-based applications, this competitive advantage is even greater than in other kinds of software.
The impact of the Maturity Model in addressing the AI governance gap
Since there are no strict AI-specific regulations and guidelines yet, many competitors are still far behind on AI responsibility, which creates a considerable opportunity for a competitive edge. Moreover, AI responsibility is key to ensuring compliance with sector-specific regulations, such as HIPAA.
By participating in this initiative with our Digital Health Innovation Lab team at Light-it, we acquired a competitive edge by becoming experts in safety and compliance. AI responsibility also carries strong ethical value. Our purpose at Light-it is to make a positive impact on our communities and, ultimately, on people's lives. In the healthcare sector, the stakes are especially high: for example, when patients communicate with chatbots about mental healthcare needs, lives may be at stake.
When developing AI agents at Puppeteer, we can now configure safeguards. For example, if the AI identifies suicidal or self-harm ideation, the chat stops and the person is immediately referred to a human care provider, or the agent replies with a hotline number and predefined suggestions.
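To make this concrete, here is a minimal sketch of how such a safeguard might wrap an agent's reply step. The function names (`detect_self_harm_risk`, `guarded_reply`) and the keyword heuristic are illustrative assumptions rather than Puppeteer's actual implementation; a production system would use a dedicated moderation model and clinically reviewed escalation flows.

```python
from dataclasses import dataclass
from typing import Callable

CRISIS_HOTLINE = "988 Suicide & Crisis Lifeline (call or text 988)"

@dataclass
class AgentResponse:
    text: str
    escalate_to_human: bool = False

def detect_self_harm_risk(message: str) -> bool:
    # Hypothetical classifier; in practice this would be a dedicated
    # moderation model, not a keyword list.
    risk_phrases = ("hurt myself", "end my life", "suicide")
    return any(phrase in message.lower() for phrase in risk_phrases)

def guarded_reply(user_message: str,
                  generate_reply: Callable[[str], str]) -> AgentResponse:
    # Run the safety check *before* the LLM is allowed to answer.
    if detect_self_harm_risk(user_message):
        return AgentResponse(
            text=("I'm concerned about your safety, so I'm connecting you "
                  f"with a human care provider now. You can also reach the "
                  f"{CRISIS_HOTLINE}."),
            escalate_to_human=True,  # stop the chat and hand off to a human
        )
    return AgentResponse(text=generate_reply(user_message))
```

The safeguard sits outside the language model itself, so it fires deterministically regardless of what the model would have generated.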
Evidence of Light-it and Puppeteer's responsible AI practices
In evaluating ourselves, we examined our resource allocation as an indicator of our maturity in AI ethics. Key questions included how many employees are given the authority to work on the subject, whether their Objectives and Key Results (OKRs) address this topic, and whether they monitor quantifiable measures associated with AI ethics. Examining resource allocation as an indicator of AI ethics activity is well suited to the agile methodology we employ as a fast-paced start-up. It also keeps the focus on what the company does in practice and on the company's culture. Change becomes real when it is encouraged by the company's own culture and day-to-day priorities.
For example, in the last quarter of 2023, one of our company OKRs was to leverage generative AI technology to make teams more productive and efficient. Today, these tools are fully integrated into the company's operations. Our next step in 2024 has been to ensure the ethical application of generative AI technologies and to take the necessary precautions against the ethical conflicts that bias, hallucinations, and errors may generate.
Light-it and Puppeteer's learnings in AI responsibility
Evaluating ourselves using the maturity model enabled us to weigh some of the risks associated with our products more carefully, especially those related to bias. We now understand more comprehensively that bias concerns exist in all AI systems, and we want to give our digital health developers the tools to help them recognize and mitigate these risks.
According to our CTO, one of the primary concerns Light-it and Puppeteer will address as a result of our evaluation is bias. We have already started working on technologies that enable developers to identify biases and tackle them, so they can provide safe care without compromising fairness. We are also planning to expand our documentation; documents pertaining to risk management and evaluation are our top priorities.
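As an illustration of what such bias-identification tooling can look like, the sketch below computes a simple demographic parity gap: the difference between the highest and lowest positive-prediction rates across patient groups. The metric, threshold, and data here are illustrative assumptions chosen for brevity, not the actual tooling we are building.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest group rates; values near 0
    suggest the model treats groups similarly on this one metric."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: flag the model for review if the gap exceeds a policy threshold.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
if demographic_parity_gap(preds, groups) > 0.2:  # threshold is a policy choice
    print("Potential bias detected; route the model for human review.")
```

Demographic parity is only one of several fairness definitions, and no single metric is sufficient on its own; the point is to make bias measurable so that mitigation efforts can be tracked over time.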
By prioritizing AI responsibility, we aim to build trust, promote inclusivity, and contribute to the responsible advancement of AI technology for the benefit of society. It is about approaching this technology's potential with caution so that it genuinely enhances people's quality of life. This is ultimately the purpose of every company working in the health ecosystem.