For those living under a rock, here’s a quick TL;DR timeline of DeepSeek’s rise:
With that out of the way, let’s explore what DeepSeek’s disruption means for U.S. companies. Its privacy policy and terms of use raise questions about data security, compliance, and operational risk, and business leaders must understand these implications to mitigate risks while leveraging such tools effectively.
DeepSeek’s policy involves collecting user inputs such as text, audio, and uploaded files, in addition to technical data like device models, operating systems, IP addresses, and usage patterns. This broad data collection raises questions about how securely this information is managed and whether it might be accessible under foreign data laws, such as those in China. For U.S. companies, the primary concern is where this data is stored and processed, especially if it’s outside the U.S.
Actionable Steps:
1. Confirm Data Storage Locations: Ensure transparency from vendors about where and how data is stored.
2. Perform Compliance Audits: Regularly review data practices to align with international and domestic regulations.
3. Implement Encryption Standards: Use robust encryption for all sensitive data transfers and storage.
DeepSeek reserves the right to use input data, including sensitive business communications or proprietary strategies, for model training and improvement. For example, a marketing agency inputting client strategy data might inadvertently contribute to DeepSeek’s broader AI development—creating unintended exposure of intellectual property and putting the company at a competitive disadvantage.
Actionable Steps:
1. Classify Data: Establish and enforce clear protocols to identify what data is permissible for upload to AI platforms.
2. Secure Contracts: Ensure vendor agreements explicitly restrict the use of proprietary data for training AI models.
3. Use Synthetic Data: Employ anonymized or synthetic data to mitigate risks associated with sharing sensitive information.
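As a concrete illustration of steps 1 and 3, sensitive fields can be scrubbed from text before it ever reaches a third-party AI platform. The sketch below is a minimal, assumption-laden example: the regex patterns and the `scrub` function are illustrative only, and a real deployment would use a dedicated PII-detection library with far broader coverage (names, addresses, account numbers).

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated library.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace detected PII with typed placeholders before any upload."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Contact Jane at jane.doe@client.com or 412-555-0199 about the Q3 plan."
print(scrub(prompt))  # → Contact Jane at [EMAIL] or [PHONE] about the Q3 plan.
```

The same pre-processing step is a natural place to enforce a data-classification check: if the text is tagged "restricted," block the upload entirely rather than scrubbing it.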
DeepSeek retains certain data even after account deletion, per its policies and applicable laws. This includes data retained for legal-compliance or analytical purposes, which could remain accessible long after a company stops using the service.
Actionable Steps:
1. Clarify Retention Policies: Request detailed documentation on data retention timelines and practices.
2. Limit Data Uploads: Minimize the amount of sensitive data shared with third-party tools to reduce potential exposure.
3. Request Data Deletion Confirmation: Ensure vendors provide assurances of data deletion upon account closure.
DeepSeek’s policy lacks specificity on whether and how user data is shared with third parties. This ambiguity raises concerns, particularly if data could be accessed by entities governed by foreign laws or used for purposes not explicitly disclosed.
Actionable Steps:
1. Conduct Vendor Risk Assessments: Evaluate the vendor’s data-sharing policies and demand clear disclosures on third-party access.
2. Negotiate Data Protections: Include clauses in contracts to limit unauthorized third-party access to sensitive data.
3. Implement Monitoring Tools: Use tools that track and audit data usage and access in real time.
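To make step 3 concrete, here is a minimal sketch of an in-house audit trail for outbound AI-tool requests. The class and field names are illustrative assumptions, not any specific product’s API; a real deployment would ship these records to a SIEM rather than keep them in memory.

```python
import json
from datetime import datetime, timezone

class AIUsageAuditLog:
    """Minimal in-memory audit trail for requests sent to external AI tools."""

    def __init__(self):
        self.records = []

    def record(self, user: str, tool: str, data_class: str) -> dict:
        # data_class is an assumed internal label, e.g. "public" / "restricted".
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "data_class": data_class,
        }
        self.records.append(entry)
        return entry

    def flagged(self):
        """Return requests that sent restricted data to an external tool."""
        return [r for r in self.records if r["data_class"] == "restricted"]

audit = AIUsageAuditLog()
audit.record("alice", "deepseek-chat", "public")
audit.record("bob", "deepseek-chat", "restricted")
print(json.dumps(audit.flagged(), indent=2))
```

Even a simple log like this gives compliance teams the raw material to answer "who sent what, where, and when" if a vendor’s data-sharing practices later come into question.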
Employees may use DeepSeek without understanding its implications, bypassing security protocols and creating vulnerabilities. Shadow IT—unsanctioned use of such tools—can result in critical data leaving secured environments, creating significant compliance and security risks.
Actionable Steps:
1. Educate Employees: Train staff on the risks and policies around using unapproved tools.
2. Implement Approved Alternatives: Provide secure, organization-sanctioned AI tools to meet employee needs.
3. Monitor Tool Usage: Use monitoring software to detect and address unauthorized software usage.
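One lightweight way to implement step 3 is to scan existing web-proxy or DNS logs for traffic to unapproved AI endpoints. The sketch below assumes a simple `timestamp user domain` log format and a hand-maintained domain list; both are illustrative, and commercial monitoring tools do this far more robustly.

```python
# Illustrative domain list -- maintain and expand this for real use.
UNAPPROVED_AI_DOMAINS = {"chat.deepseek.com", "api.deepseek.com"}

def find_shadow_ai_usage(proxy_log_lines):
    """Return (user, domain) pairs for requests to unapproved AI services.

    Assumes each log line has the form: 'timestamp user domain'.
    """
    hits = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, user, domain = parts[:3]
        if domain in UNAPPROVED_AI_DOMAINS:
            hits.append((user, domain))
    return hits

log = [
    "2025-01-28T09:14:02Z jdoe chat.deepseek.com",
    "2025-01-28T09:15:11Z asmith intranet.example.com",
]
print(find_shadow_ai_usage(log))  # → [('jdoe', 'chat.deepseek.com')]
```

Flagged hits are best treated as prompts for the education step above, not as grounds for punishment: employees usually turn to shadow IT because a sanctioned alternative is missing.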
The use of DeepSeek for generating content presents both opportunities and risks. While AI tools like DeepSeek can streamline workflows and enhance productivity, they are not immune to errors, biases, or overconfidence in their outputs. Automation bias—the tendency to trust AI-generated content without sufficient scrutiny—can have serious implications, particularly in high-stakes environments.
Actionable Steps:
1. Establish Content Review Protocols: Implement robust review processes to validate the accuracy and fairness of AI-generated materials.
2. Educate Teams on AI Outputs: Train employees to critically assess and verify AI-generated content, ensuring that human oversight complements AI capabilities.
3. Promote Transparency: Clearly communicate when AI tools like DeepSeek are used in content creation to build trust and manage expectations effectively.
The action points outlined in this article are not just responses to the specific challenges posed by DeepSeek—they are principles we should be considering every day as we evaluate and improve our businesses. These are exactly the considerations we address when building and implementing comprehensive cybersecurity strategies through NextLink Labs’ Cybersecurity Strategy and Implementation Plans (CSIP).
For organizations ready to take the next step, our CSIP framework can guide you through every phase of developing or enhancing your cybersecurity program. For those who aren’t at that point yet, the action points provided here are a great starting place to proactively address the challenges posed by DeepSeek and to strengthen your overall cybersecurity posture.