Microsoft has been pushing its AI services hard on its user base, especially since the launch of the Copilot+ PC, but it seems that even the company itself does not fully trust its creation. According to the Microsoft Copilot Terms of Use, which were updated in October last year, the AI large language model (LLM) is designed for entertainment use only, and users should not rely on it for important advice. While this may be a boilerplate disclaimer, it’s quite ironic given how aggressively the company promotes Copilot for business use and has integrated it into Windows 11.
“Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended,” the document says. “Don’t rely on Copilot for important advice. Use Copilot at your own risk.” This isn’t limited to Copilot, either. Other AI LLMs carry similar disclaimers. For example, xAI says: “Artificial intelligence is rapidly evolving and is probabilistic in nature; therefore, it may sometimes: a) result in Output that contains ‘hallucinations,’ b) be offensive, c) not accurately reflect real people, places or facts, or d) be objectionable, inappropriate, or otherwise not suitable for your intended purpose.”
While generative AI is a useful tool that can indeed increase productivity, it’s still just a tool, and it offers no accountability for any mistakes it might make. Because of this, people who use it must be careful to question its output and double-check its results. But even if you’re aware of the limitations of current AI technology, humans are susceptible to automation bias: we tend to favor the results that machines produce and ignore data that might contradict them. AI could make this phenomenon more severe, especially as it can generate results that look plausible, or even true, at a cursory glance.