Government Restricts Use of AI Tools Like ChatGPT and DeepSeek Among Employees
AI Tools: The Finance Ministry of India has issued an order banning the use of AI tools and applications, such as ChatGPT and DeepSeek, on official government devices. The circular, issued on January 29, 2025, aims to protect sensitive government data from potential cyber threats.
Why did the government ban AI tools?
The order, signed by Joint Secretary Pradeep Kumar Singh, states that AI-based applications may pose security risks to government systems, and it advises all employees to avoid using such tools on official devices. The instruction was sent, with the Finance Secretary's approval, to major government departments including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM and Financial Services.
Global trend of restricting AI tools
Security concerns about AI tools are growing worldwide, and many governments and private companies are limiting their use to protect sensitive data. AI models such as ChatGPT process user inputs on external servers, which creates a risk of data leaks and unauthorized access. Several global companies have already banned these tools to keep confidential data secure.
Will this ban also apply to personal devices?
The government order does not clarify whether employees can use AI tools on their personal devices. However, this move indicates that the government is prioritizing data security while taking a cautious approach towards AI.
It remains unclear whether the government will formulate a clear policy for AI use in the future. For now, Finance Ministry employees will have to rely on traditional methods for their official work.
Main reasons for banning AI tools
Threat of data leaks
AI tools like ChatGPT and DeepSeek process user-entered data on external servers. If government employees enter sensitive information into these tools, it could be stored, accessed, or misused. Government departments handle confidential financial data, policy drafts, and internal communications, so even an unintentional leak can pose a serious security risk.
Lack of control over AI models
The government can control traditional software, but AI tools are cloud-based and owned by private companies. ChatGPT, for example, is operated by OpenAI, and the government has no visibility into how it processes and stores data. This increases the risk of foreign interference and cyber attacks.
Compliance with data protection laws
India is implementing strict data privacy legislation such as the Digital Personal Data Protection (DPDP) Act, 2023. Unregulated use of AI tools could violate these data protection requirements and leave government systems vulnerable to cyber threats.
The government has taken this step to strengthen the security of its data. However, it is not yet clear whether a regulated policy for the use of AI tools will follow. For now, Finance Ministry officials have been advised to work in traditional ways, which helps ensure the security of sensitive data.
Ensuring Safe Use of AI Tools
For those who choose to use AI applications, it’s essential to follow best practices to protect personal and sensitive information (a simple illustrative check is sketched after the list below):
- Avoid Sharing Personal Information: Refrain from inputting personal details such as birth dates, addresses, or financial information into AI platforms.
- Protect Financial Data: Do not share banking details, account numbers, or other financial information with AI tools.
- Safeguard Health Information: Avoid discussing personal health details or medical records with AI applications.
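To illustrate these precautions, here is a minimal Python sketch of the kind of pre-submission check an individual could run on a draft prompt before pasting it into an AI tool. The patterns and function names (`find_sensitive_details`, `safe_to_submit`) are hypothetical examples for illustration only; they are not part of the ministry's advisory or any official guideline, and a real screening tool would need far more thorough rules.

```python
import re

# Illustrative patterns for the kinds of details the best practices above warn
# against sharing. These are simplified examples, not an exhaustive detection list.
SENSITIVE_PATTERNS = {
    "date of birth": re.compile(r"\b\d{2}[/-]\d{2}[/-]\d{4}\b"),
    "account-style number": re.compile(r"\b\d{9,18}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def find_sensitive_details(text: str) -> list[str]:
    """Return the labels of any sensitive pattern that appears in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


def safe_to_submit(text: str) -> bool:
    """Treat a prompt as safe only if no sensitive pattern is found."""
    return not find_sensitive_details(text)


if __name__ == "__main__":
    draft_prompt = "Summarise the attached note. My account number is 123456789012."
    hits = find_sensitive_details(draft_prompt)
    if hits:
        print("Do not submit this prompt; it appears to contain:", ", ".join(hits))
    else:
        print("No obvious sensitive details found; review manually before submitting.")
```

Even with such a check in place, manual review remains essential, since automated patterns cannot recognize every form of confidential information.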
Conclusion
The Ministry of Finance’s advisory underscores the government’s commitment to maintaining the confidentiality and security of its data. While AI tools offer numerous benefits, it’s crucial to balance innovation with security. Employees are encouraged to rely on traditional methods for official tasks until clear policies governing AI tool usage are established.
Frequently Asked Questions (FAQs)
Q1: Can government employees use AI tools on their personal devices for official work?
A1: The advisory does not explicitly address personal devices. However, to ensure data security, it’s recommended to avoid using AI tools for official purposes on any device.
Q2: What are the primary risks associated with using AI tools like ChatGPT and DeepSeek?
A2: The main concerns include potential data leaks, unauthorized access, and the lack of control over how external platforms process and store sensitive information.
Q3: Have other countries implemented similar restrictions on AI tools?
A3: Yes, nations such as Australia and Italy have imposed limitations on the use of AI applications like DeepSeek due to data security concerns.
Q4: How can individuals safely use AI tools?
A4: Users should avoid sharing personal, financial, or health-related information with AI platforms to minimize potential risks.
Q5: Will the government develop a policy for AI tool usage in the future?
A5: While there’s no official information currently, it’s possible that the government will formulate guidelines to regulate AI tool usage, ensuring data security and compliance with privacy laws.