Kaspersky sets bar with ethical principles for AI, ML in cybersecurity

Cybersecurity company Kaspersky has introduced a comprehensive set of ethical principles for the use of artificial intelligence (AI) and machine learning (ML). The principles were unveiled at the UN Internet Governance Forum in Kyoto, Japan, and underscore the company's dedication to responsible and transparent technological development. With nearly two decades of experience in the cybersecurity field, Kaspersky has successfully integrated ML algorithms, a subset of AI, into its solutions.

Kaspersky’s six ethical principles for AI and ML in cybersecurity set an industry benchmark. They prioritize transparency, urging companies to openly inform customers about AI/ML integration. Safety is paramount, with rigorous security audits, reduced reliance on third-party datasets during training, and a preference for secure cloud-based ML technologies.

Kaspersky commits to maintaining human control as a key element of its AI/ML systems to protect against evolving threats, including Advanced Persistent Threats (APTs). Privacy is a top priority, with stringent measures to safeguard user data and uphold privacy rights. Additionally, the company dedicates its AI/ML systems solely to defensive purposes, in line with its mission to create a safer world. Openness to dialogue is equally essential: the company encourages collaboration to promote ethical AI practices, recognizing the importance of ongoing stakeholder dialogue for innovation and progress.

Kaspersky’s CTO, Anton Ivanov, highlighted AI’s potential benefits in cybersecurity while acknowledging the associated risks. He stressed the importance of sharing ethical guidelines and fostering an industry-wide dialogue on responsible AI and ML development. These principles extend the company’s Global Transparency Initiative, which aims to promote transparency and accountability among technology providers for a more secure digital world.