CERT-In has highlighted several vulnerabilities that exist within AI systems. These include threats such as data poisoning, which manipulates the input data used to train AI models, and adversarial attacks, where malicious entities deliberately feed deceptive inputs to mislead AI systems. Another alarming concern is prompt injection, a method used to tamper with AI prompts to extract sensitive or unintended outputs.
Moreover, cybercriminals are exploiting the rising popularity of AI by developing fake AI applications. These apps often disguise themselves as legitimate tools but are embedded with malware, posing significant risks to user data and device security. Such tactics underscore the urgent need for heightened vigilance when interacting with AI-driven platforms.
Recognizing these risks, CERT-In has issued practical recommendations to help users safeguard their data and privacy.
➡Download AI applications only from trusted sources and avoid sharing sensitive personal information through these platforms.
➡When using AI services, consider creating anonymous accounts to limit exposure of personal details.
➡Rely on AI tools only for their intended purposes and exercise caution when using them in critical domains like healthcare or legal analysis.
The immense potential of AI requires responsible adoption and proactive cybersecurity practices. Staying informed about threats is essential for users and organizations to harness AI's benefits while mitigating risks. CERT-In’s advisory emphasizes the importance of caution and informed use to ensure a safer digital future.