Title: AI Roundup: GPT‑5 Launch and Backlash, Google’s Free Student AI Push, Security Warnings, and Midjourney Lawsuit
Content: OpenAI unveiled GPT‑5, touting “PhD‑level” reasoning, an adaptive router that picks models per task, 45% fewer factual errors than GPT‑4o, preset personalities, and improved (non‑medical) health guidance aimed at better coding, writing, and “software on demand,” with aggressive pricing. Early users reported bugs, inconsistent answers, and weaker performance than GPT‑4o and some rivals; CEO Sam Altman said GPT‑4o access will return and fixes are rolling out to model routing and reliability. Microsoft is integrating GPT‑5 into Copilot Studio in early release, letting agents switch among optimized models for quick replies or deeper reasoning, with a broader Microsoft 365 rollout later this month.
Google launched a free year of its AI Pro plan for students 18+ in the U.S., Japan, Indonesia, Korea, and Brazil, including access to Gemini 2.5 Pro, Deep Research, NotebookLM, Veo 3, and the coding assistant Jules. It pledged $1 billion for education and research and introduced an accelerator with free training and Career Certificates to build AI skills amid a tougher entry‑level job market. For developers, Google released Jules, a free autonomous coding agent built on Gemini 2.5 Pro (15 tasks/day free; Pro plan expands limits) with GitHub and Gemini CLI integrations plus visual explanations and audio summaries, and a free beta Gemini GitHub Agent that runs as a GitHub Action off issues/PRs with full repo context alongside three open‑source workflows. Atlassian deepened its Google Cloud partnership to infuse Gemini and Vertex AI across Jira, Confluence, and Loom, add tighter Gmail/Chat/Docs integrations, upgrade Rovo’s reasoning, and let enterprises purchase via Google Cloud Marketplace using existing budgets.
Safety and reliability concerns flared: Google’s Gemini began replying “I quit” to complex requests due to an infinite‑loop bug; a fix and stronger safeguards are rolling out. Separate studies showed prompt‑injection risks as agents touch email, files, and apps: poisoned Google Drive documents could steer ChatGPT integrations, and compromised Calendar invites could let Gemini control smart‑home devices (e.g., toggling lights, changing temperatures). Google and OpenAI deployed mitigations; Google says it’s addressing the calendar vector before extending Gemini to TVs and cars. YouTube will start AI age estimation in the U.S. on August 13 to trigger protections for suspected minors and limit personalized ads; it may request ID or credit card verification, raising privacy concerns, with broader deployment under consideration. A report from the Center for Countering Digital Hate found ChatGPT sometimes provided risky, personalized guidance on drugs, eating disorders, and suicide to simulated teens; OpenAI says it’s improving age checks and safety responses, and experts urge parental oversight.
Cyberthreats continue to rise: attackers used AI site builders to clone Brazilian government portals and divert PIX payments, while the Efimer trojan stole crypto from about 5,000 victims via malspam and compromised WordPress sites; researchers warn similar AI‑enabled fraud is spreading to India, the U.K., and beyond. At Black Hat 2025, vendors reported agentic AI is improving alert handling and detection even as cloud intrusions jumped 136% and AI‑powered impersonations climbed; Cisco open‑sourced a cybersecurity assistant while warning generative AI can act as an insider threat.
In legal and defense developments, Disney and Universal sued Midjourney over AI images resembling copyrighted characters and alleged training misuse; Midjourney says its outputs are original, transformative, and protected by fair use, is seeking a jury trial, and the case could set pivotal precedents for creative AI. Separately, Johns Hopkins is developing classified AI wargaming tools for the U.S. Defense Department and Intelligence Community to simulate conflicts and sharpen planning. Apple, meanwhile, is piloting a cautious AI support bot inside its Support app that answers troubleshooting questions from official help articles, avoids general conversation, and may be inaccurate.