
A new platform is testing a model that lets users pay to chat with AI versions of experts. The approach could monetize knowledge at scale, but it also raises questions about reliability, privacy, and where the boundary between human judgment and machine-generated advice should be drawn.
Onix describes its service as a “Substack for chatbots.” Users can subscribe to AI versions of experts in health and lifestyle, in a format intended to resemble following an author on Substack.
According to co-founder David Bennahum, the chatbots are trained using the expert’s own data, knowledge, and communication style to deliver an experience similar to a personal consultation. Bennahum says the company has spent years building a system of “personal intelligence,” and that user data is stored and encrypted on-device to limit leakage.
Onix also argues that training AI directly from an expert’s materials helps address intellectual property and capitalization issues. The company says its models are designed to confine responses to the expert’s domain to reduce “hallucinations.”
Real-world testing suggests the system is not fully reliable. In beta, the chatbots sometimes drift off-topic or “fabricate” information when users steer the conversation outside the training scope.
The platform is currently available to a limited group of users via a waiting list before a broader rollout.
Onix’s pricing is positioned as a major advantage over direct consulting. Estimates in the report suggest users pay between $100 and $300 per year for chatbot access, while a one-on-one consultation with an expert such as David Rabin can cost up to $600 per hour.
Rabin, who participates in the platform, says his chatbot can assist patients when direct contact is not possible, for example by helping reduce stress or avoid hospitalization. He also emphasizes that the AI requires close supervision to keep it from overstepping clinical boundaries.
Health-communication expert Michael Rich says he joined because he trusts the platform’s security and its clear positioning. He adds that the chatbot can provide guidance but does not replace medical treatment.
Despite those cautions, the report notes that the line is not always clear in practice. In some cases, the chatbot may offer advice alongside recommendations for products connected to the expert. For example, Rabin co-developed a relaxation device, and his chatbot has suggested that product multiple times.
Bennahum says this is “natural” because experts often build product ecosystems around their philosophy. For users, that can create concerns about conflicts of interest, and it also leaves open the question of whether the system is truly effective.
The platform begins with 17 experts, mainly focused on health and lifestyle. The report notes that many are also influencers or run their own businesses, which can raise questions about objectivity in the advice provided.
UC San Francisco medical expert Robert Wachter says the system could be useful in contexts where specialist access is limited, but he highlights the central issue: whether it actually works.
From a positive perspective, the platform could help users access knowledge more flexibly, similar to an interactive book. However, the report warns that if a specialist is wrong or biased, AI could amplify those biases across a much larger audience.
Beyond individual accuracy, the report frames the platform as a test of future human-AI relationships. While knowledge can be replicated indefinitely, trust and human connection may not be guaranteed by chatbots that simulate empathy or guide behavior.
Source: Wired.