Infosys has introduced an open-source Responsible AI Toolkit as part of its Infosys Topaz Responsible AI Suite, aiming to enhance transparency, security, and trust in artificial intelligence systems. The toolkit provides enterprises with defensive technical guardrails to address AI-related risks, including privacy breaches, security threats, biased outputs, hallucinations, and deepfakes.
The Responsible AI Toolkit is based on Infosys’ AI3S framework (Scan, Shield, and Steer), which helps businesses detect and mitigate risks associated with AI adoption. The solution enhances model transparency by providing explanations for AI-generated outputs while maintaining performance efficiency. The open-source nature of the toolkit allows customization and seamless integration across cloud and on-premises environments, making it adaptable for various industries. A simplified sketch of the guardrail idea appears below.
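To make the Scan, Shield, and Steer idea concrete, the following is a minimal, hypothetical Python sketch of an output guardrail. It is not the Infosys Responsible AI Toolkit’s actual API; all function names, patterns, and return formats here are illustrative assumptions showing how a guardrail might flag risky content (Scan), redact it (Shield), and return a safe output with an explanation (Steer).

```python
import re

# Hypothetical illustration only: these names and checks are NOT the
# Infosys Responsible AI Toolkit API, just a sketch of the guardrail concept.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s()-]{7,}\d\b"),
}

def scan(text: str) -> dict:
    """Scan: detect risky content (here, simple PII patterns) in model output."""
    return {name: pat.findall(text) for name, pat in PII_PATTERNS.items() if pat.search(text)}

def shield(text: str, findings: dict) -> str:
    """Shield: redact the flagged spans before the output leaves the system."""
    for name in findings:
        text = PII_PATTERNS[name].sub(f"[REDACTED {name.upper()}]", text)
    return text

def steer(model_output: str) -> dict:
    """Steer: return the safe output plus a short explanation of what was changed."""
    findings = scan(model_output)
    redacted = shield(model_output, findings) if findings else model_output
    count = sum(len(v) for v in findings.values())
    return {
        "output": redacted,
        "explanation": f"redacted {count} item(s)" if findings else "no issues found",
    }

if __name__ == "__main__":
    print(steer("Contact me at jane.doe@example.com or +1 555 123 4567."))
```

In practice, an enterprise toolkit of this kind would layer many such checks (bias, hallucination, prompt injection, deepfake detection) and log explanations for auditability; the example above only shows the overall flow.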
“As AI becomes central to enterprise growth, ethical adoption is no longer optional,” said Balakrishna D. R., executive vice president and global services head, AI and industry verticals at Infosys. “By making the Responsible AI Toolkit open source, we are fostering a collaborative ecosystem to address AI bias, security, and transparency challenges.”
Industry leaders have acknowledged Infosys’ initiative as a step toward safer and more accountable AI practices.
- Joshua Bamford, head of science, technology, and innovation at the British High Commission, praised Infosys’ decision to go open source, calling it a benchmark for responsible AI development.
- Sunil Abraham, public policy director for data economy and emerging tech at Meta, highlighted the importance of open-source tools in ensuring AI safety and accessibility for a broader spectrum of developers.
- Abhishek Singh, additional secretary at India’s Ministry of Electronics and Information Technology (MeitY), noted that the toolkit would be instrumental in enhancing security, privacy, and fairness in AI models, particularly for startups and AI developers.
Infosys has strengthened its Responsible AI initiatives through the launch of its Responsible AI Office and participation in global AI safety efforts. The company is among the first to achieve ISO 42001:2023 certification for AI management systems and is actively involved in industry coalitions such as the NIST AI Safety Institute Consortium, AI Alliance, and Stanford HAI.
Featured image credit: Infosys