Privacy is increasingly becoming a tradable commodity as Google embeds AI across everyday digital tools. Gemini’s privacy options, however, are described as varied and difficult to understand, raising questions about how user data may be used when the AI is integrated into services such as Gmail and Google Drive.
Google’s AI capabilities rely on data, and the company has access to large volumes of user content through Gmail and Google Drive. The central privacy question is what happens when users do not want Gemini to access or use personal information while drafting, summarizing, or processing content.
The article argues that the answer is not straightforward and depends on how users access Gemini features. It also notes that attempts to avoid data collection can steer users into interfaces designed to serve the developer's goals rather than the users' own preferences.
Google has issued clarifications about how Gemini interacts with Workspace content. The article says Google maintains that emails are not directly fed into Gemini training. Instead, it describes Gemini as accessing data for separate tasks when users request actions such as summarizing a conversation or searching within Drive.
According to a Google spokesperson quoted in the article, "Protecting user privacy and data control is foundational to how we deploy AI in Workspace. Your content remains yours." At the same time, the article states that Google acknowledges its AI models may be trained on inputs and outputs from Gemini, even though the company says it does not scan mailboxes to train the model.
The article highlights that Gemini outputs can include email summaries, quotes from personal files, and condensed sensitive information. It argues that even if personal data is filtered or minimized during training, users cannot verify how effective those automated processes are.
To reduce exposure, users are reportedly advised to keep Gemini interactions superficial, avoid granting access to other apps, and use temporary chats. The article says the biggest barrier is a Gemini Apps Activity setting that is difficult to interpret.
It states that if users do not want Google to train AI on their activity, they must turn off chat history. The article describes this trade-off as a catch-22: users either forgo the stored history that could support future interactions or consent to having their chat content used for AI training.
The article describes the Gemini settings as reflecting a coercive pattern, suggesting the design may push users toward accepting defaults. It also points to Google’s broader strategy of leveraging default placement—such as being the default search option on iPhone—to influence user behavior.
In the AI era, the article argues that Google’s default approach is to share data for AI training, provide AI-generated summaries across email, and use AI to intervene in document creation. It concludes that while AI’s future may be promising, the cost should not be a loss of human autonomy.
Ars Technica. By Anh Phuong.