DeepSeek AI has drawn controversy after reports suggested it used outputs from Google's Gemini model to train its AI, a practice known as distillation. The reports raised ethical concerns, especially since OpenAI's terms of service prohibit using its outputs to train competing models and DeepSeek had previously faced accusations of training on ChatGPT outputs. Tech observers, including the developer of SpeechMap, voiced suspicions based on similarities between the response patterns of DeepSeek's model and Gemini's. Despite the ethical implications, experts recognize the strategic reasoning behind DeepSeek's actions, given the geopolitical landscape and the limited compute available under US-China tech tensions. Nathan Lambert of AI2 noted that synthetic data from leading model APIs could give DeepSeek an indirect computational advantage. The situation highlights a regulatory gap in the rapidly evolving AI industry, where efficiency often conflicts with ethical standards and intellectual property rights, and it raises questions about emerging norms and the need for stricter regulation of AI development.
DeepSeek’s Alleged Use of Google Gemini Data to Train AI Sparks Ethical Controversy
