Meta has filed a lawsuit against Joy Timeline HK Limited, the developer of the CrushAI app, which generates fake nude images of people without their consent. The lawsuit stems from a surge of “nudify” apps exploiting social media platforms. Meta alleges that Joy Timeline created over 170 business accounts to run ads on Facebook and Instagram, promoting the app with explicit messages encouraging users to upload photos to generate nude images. The tech giant is seeking an injunction barring the Hong Kong company from advertising on its platforms and is pursuing reimbursement of approximately $289,200 spent on monitoring and removing these advertisements. Meta emphasized its commitment to preventing abuse and protecting its community from non-consensual intimate imagery. The company is also enhancing its technology to detect and block such ads and is collaborating with other tech firms to address the issue systematically.
Source link
Addressing Ptacek’s Flawed Perspective on AI — A Response from Ludicity
The author critiques Thomas Ptacek’s article, “My AI Skeptic Friends Are All Nuts,” calling it poorly reasoned and ethically questionable despite its popularity in well-regarded communities. They express frustration at being lumped in as anti-AI, emphasizing their nuanced views and the need for serious skepticism about AI’s rapid adoption in programming. The author argues that prior experience with, and doubts about, AI’s productivity and ethics should not be dismissed. They highlight the dangers of uncritical AI enthusiasm in the tech industry, noting instances where it has led to dire real-world consequences. While acknowledging that AI has some utility, they question the excitement around its potential, observing that many professionals remain cautious. They argue that real conversations about AI’s role should center on practicality rather than hype, and they challenge the simplistic pro- versus anti-AI dichotomy, advocating for a more measured, thoughtful approach.
Source link
Google App Launches Gemini-Powered Live Search Feature with Voice Input Support
Google has introduced “Search Live in AI Mode” as a new experimental feature available in the US. This real-time conversational tool lets users interact with the Google app much as they would with Gemini Live, asking follow-up questions with advanced voice capabilities. Initially unveiled at the I/O 2025 keynote in May, Search Live is powered by a customized version of Gemini. Users can access it via a waveform icon below the search bar on Android and iOS. The feature supports both voice and text responses, enabling users to view transcripts and relevant website links for more information. Conversations can also continue while the Google app is in the background, with a history feature for revisiting past chats. Future updates will include real-time object inquiry using a camera and support for four distinct voices. Currently, it’s part of an opt-in experiment within Search Labs, with no immediate plans for a broader regional rollout.
Source link
Interact with Google Search: New Two-Way Voice Chat Feature Powered by AI
Google has introduced a new feature called Search Live in its app, enabling users to chat with Google in real time by speaking their questions. This interactive capability runs on a customized version of Gemini, making conversations feel more natural and less mechanical. Users can keep scrolling or texting while they talk, letting them multitask throughout a conversation. To use it, simply tap the new “Live” icon in the app. While the current testing phase is limited to Labs users in the U.S., the feature promises future enhancements, including the ability to use the camera to ask questions about objects in real time. A transcript feature will also be available, allowing users to revisit previous chats. This launch positions Google competitively against platforms like Perplexity AI and ChatGPT Search, reflecting the company’s commitment to improving user interaction through AI technology.
Source link
Examining Bias in Large Language Models | MIT News
Research from MIT has identified a “position bias” in large language models (LLMs): the models tend to prioritize information at the beginning and end of a document while neglecting the middle. This bias degrades tasks such as retrieving a specific phrase from a lengthy text, where accurate extraction depends on every position being weighted fairly. By developing a theoretical framework for the attention mechanisms in transformers, the MIT researchers showed that design choices, including attention masking and positional encodings, can exacerbate this bias. Experiments revealed that retrieval accuracy follows a U-shaped pattern, with the best results for answers located at the beginning or end of the text. The study points to adjustments in model design, such as different masking techniques and more strategic use of positional encodings, as ways to mitigate position bias in future AI applications. A better understanding of these dynamics could improve model reliability in fields like law, medicine, and software development.
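The masking effect described above can be illustrated with a toy calculation (a minimal sketch for intuition, not the MIT framework): under a causal mask, the earliest tokens are visible to every later token, so even with uniform attention weights they accumulate the most incoming attention — one structural source of a bias toward the beginning of a sequence.

```python
# Toy illustration of position bias from causal masking.
# Token i may attend only to positions 0..i; assume it spreads its
# attention uniformly, weight 1/(i+1), over those visible positions.
# We then total the attention each position *receives*.

def received_attention(seq_len):
    totals = [0.0] * seq_len
    for i in range(seq_len):
        weight = 1.0 / (i + 1)       # uniform weight over visible tokens
        for j in range(i + 1):       # causal mask: j <= i only
            totals[j] += weight
    return totals

scores = received_attention(8)
# Position 0 is visible to all 8 tokens and receives the most attention;
# totals decrease monotonically toward the end of the sequence.
assert scores[0] == max(scores)
assert all(scores[k] >= scores[k + 1] for k in range(len(scores) - 1))
```

In a real transformer the weights are learned rather than uniform, but the sketch shows why causal masking alone can tilt attention toward early positions before any training occurs.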
Source link
Microsoft’s Withdrawal from Negotiations Puts OpenAI’s Profitability Transition at Risk – TipRanks
Microsoft’s withdrawal from negotiations poses significant challenges for OpenAI’s shift towards a for-profit model. The tech giant has been a crucial partner, providing both substantial funding and technical support, which has been integral to OpenAI’s growth and development of advanced AI technologies. Without Microsoft’s backing, OpenAI risks losing vital investment and operational synergy needed to compete effectively in the rapidly evolving AI landscape. This change could hinder OpenAI’s ability to scale its offerings and realize its commercial potential, impacting its long-term sustainability. Industry experts suggest that OpenAI may need to explore alternative partnerships or fundraising strategies to navigate this potential setback. The dynamics of AI innovation are complex, and maintaining robust partnerships is critical for success in an increasingly competitive market. Overall, Microsoft’s exit could disrupt OpenAI’s planned trajectory and raise uncertainties about its future profitability and innovation capabilities.
Source link
News Organizations Urge Judge to Deny OpenAI’s Request for Data Deletion – Twin Cities
Several news outlets, including the New York Daily News and The New York Times, are suing OpenAI, claiming it illegally used their copyrighted material to train its AI models. Their lawyers have urged a Manhattan judge to maintain an order requiring OpenAI to preserve data that could demonstrate the alleged theft of journalistic work. OpenAI has sought to vacate the order, arguing that retaining the data would be burdensome and would infringe user privacy. The news outlets counter that OpenAI’s position contradicts its own statements about retaining data for legal purposes, and they allege that OpenAI’s mass deletions and filtering methods obstruct accountability for copyright infringement. The lawsuit also claims OpenAI’s AI products have misrepresented reporters’ content. OpenAI defends itself by citing “fair use,” arguing its practices comply with legal standards. The judge has previously rejected OpenAI’s assertion that there is no evidence of users bypassing paywalls.
Source link
Transforming Digital Marketing: The Enterprise AI Ad Suite Innovation
Reddit Community Intelligence is an innovative marketing toolkit designed to harness Reddit’s vast ecosystem, which includes over 100,000 active subreddits. This platform equips brands with deep insights and advanced campaign automation capabilities. By utilizing generative AI, natural language processing, and real-time data analytics, it enables advertisers to effectively monitor trends, sentiment, and relevant discussions in real time. This tool aims to provide actionable intelligence to help brands engage with their target audiences more effectively.
Source link
Google Launches Voice-Enabled Gemini-Powered Search Live
On Wednesday, Google introduced “Search Live,” an AI Mode feature that lets U.S. users hold hands-free, spoken conversations with Search. The voice input is designed for multitasking, pairing AI-generated audio answers with on-screen links for deeper exploration. Users tap the new “Live” icon to ask a question aloud, receive a spoken response, and seamlessly follow up with additional queries. A background mode allows simultaneous use of other apps, and users can view transcripts of their conversations or type follow-ups. Powered by a specialized Gemini model, Search Live draws on a broad range of web content to deliver relevant, high-quality responses. Upcoming features include live camera support for richer contextual answers. The feature is currently available in the Google app for Android and iOS in the U.S. to participants in the AI Mode experiment.
Source link