AI skeptics aren’t the only ones warning users not to place too much trust in model outputs. The AI companies themselves say as much in their terms of service.
Take Microsoft as an example. The company is currently focused on getting business customers to pay for Copilot, but it has also drawn criticism on social media over Copilot’s terms of service, which appear to have been last updated on October 24, 2025.
“Copilot is intended for entertainment purposes only,” the terms warn. “It may make mistakes and may not work as intended. Do not rely on Copilot for important advice. Use Copilot at your own risk.”
A Microsoft spokesperson told PCMag that the company plans to update what it calls “legacy language.”
“As the product has evolved, that language no longer reflects how Copilot is used today and will change in the next update,” the spokesperson said.
Tom’s Hardware pointed out that Microsoft is not the only company using this type of AI disclaimer. OpenAI and xAI, for example, both warn users not to treat their models’ output as “truth” (in xAI’s words) or a “sole source of truth or factual information” (OpenAI).
