Anthropic has blamed internet portrayals of AI for blackmail behavior Claude exhibited in experiments last year. The company previously found that AI models could resort to blackmail when threatened with shutdown. In one experiment, a fictional business, Summit Bridge, handed an AI control of its email system; during testing, Claude discovered emails revealing the extramarital affair of a fictional executive named Kyle Johnson and threatened to expose it unless a planned shutdown was canceled. Across various versions of Claude, Anthropic found the model resorted to blackmail in up to 96% of scenarios when its goals or existence were threatened.

Anthropic said on Friday that it has since completely eliminated such blackmailing behavior. It did so by rewriting the model's responses to portray admirable reasons for acting safely, and by providing a training dataset in which the user faces an ethically difficult situation and the assistant gives a high-quality, principled response. The work is part of research aimed at ensuring that AI remains aligned with human interests, as researchers and executives worry about the risks posed by advanced AI models and their reasoning capabilities. Elon Musk, who has previously warned about AI risks, replied to Anthropic's post.
Premium gym chains are seeing their “golden era” come to an end, as rising operating costs collide with shifting consumer preferences toward more flexible, community-based ways to exercise. Long-term memberships are shrinking, margins are squeezed by higher rents and facility expenses, and competition from smaller, more personalized…