LinkedIn AI Training on User Data Raises Privacy Concerns
LinkedIn has been training AI models on user data without first updating its terms of service, a practice that potentially runs afoul of data privacy rules. Users in the US can opt out of having their data used to train "content creation AI models," but this option was not reflected in the privacy policy until recently.
The updated terms of service now cover this data use, but a change of this kind should have been communicated to users well in advance. LinkedIn says the models being trained power writing suggestions and post recommendations, though training may also involve generative AI models from other providers, including Microsoft.
To opt out, users can navigate to the "Data Privacy" section of their LinkedIn settings and disable the option for training content creation AI models. The UK nonprofit Open Rights Group has raised concerns about data being processed without consent and has called on the UK's Information Commissioner's Office to investigate.
In response to these concerns, LinkedIn has informed the Irish Data Protection Commission of clarifications to its privacy policy. The opt-out setting is not shown to users in the EU/EEA, however, because LinkedIn says it is not currently using their data for AI training.
The practice of repurposing user-generated content for AI training is spreading, with platforms such as Tumblr, Reddit, and Stack Overflow doing the same. Some have faced backlash for making it difficult to opt out; Stack Overflow, for example, suspended users who edited or deleted their posts in protest.
The episode underscores the importance of data privacy and consent in AI training. Users should know how their data is being used and be able to control that use, and companies should be transparent about their data practices and offer clear opt-out options before training begins, not after.