Microsoft CEO Satya Nadella speaks at the company’s Ignite Spotlight event in Seoul on November 15, 2022.
SeongJoon Cho | Bloomberg | Getty Images
Thanks to recent advances in artificial intelligence, new tools like ChatGPT are surprising consumers with their ability to create compelling writing based on people’s questions and prompts.
While these AI-powered tools have gotten much better at producing creative and sometimes humorous responses, they often contain incorrect information.
For example, in February when Microsoft debuted its Bing chat tool, built with the GPT-4 technology created by Microsoft-backed OpenAI, people noticed that the tool gave wrong answers during a demo related to financial earnings reports. Like other AI language tools, including similar software from Google, the Bing chat feature can sometimes present false facts that users may believe to be the ground truth, a phenomenon scientists call a “hallucination.”
These problems with facts have not slowed the AI race between the two tech giants.
On Tuesday, Google announced it was bringing AI-powered chat technology to Gmail and Google Docs, letting it assist users in composing emails or documents. On Thursday, Microsoft said its popular business apps like Word and Excel would soon come with ChatGPT-like technology dubbed Copilot.
But this time, Microsoft says the technology is “usefully flawed.”
In an online presentation about the new Copilot features, Microsoft executives brought up the software’s tendency to produce inaccurate responses, but pitched that as something that could be useful. As long as people realize that Copilot’s answers can be sloppy with facts, they can edit out the inaccuracies and more quickly send their emails or finish their presentation slides.
For example, if a person wants to create an email wishing a family member a happy birthday, Copilot can still be helpful even if it presents the wrong birth date. In Microsoft’s view, the mere fact that the tool generated text saves a person some time, and is therefore useful. People just need to take extra care and make sure the text doesn’t contain any errors.
Scientists may disagree.
Some technologists like Noah Giansiracusa and Gary Marcus have expressed concern that people may be placing too much trust in today’s AI, heeding advice from tools like ChatGPT when asking questions about health, finances and other important topics.
“ChatGPT’s toxicity safeguards can be easily bypassed by those who want to use it for evil, and as we saw earlier this week, all the new search engines continue to hallucinate,” the two wrote in a recent Time op-ed. “But once we get past the opening day jitters, what will really count is whether any of the big players can build artificial intelligence that we can really trust.”
It is unclear how reliable Copilot will be in practice.
Microsoft chief scientist and technical fellow Jaime Teevan said that when Copilot “gets things wrong or is biased or misused,” Microsoft has “mitigations in place.” Additionally, Microsoft will initially test the software with only 20 enterprise customers so it can learn how it works in the real world, she explained.
“We’re going to make mistakes, but when we do, we’re going to fix them quickly,” Teevan said.
The business stakes are too high for Microsoft to ignore the enthusiasm for generative AI technologies like ChatGPT. The challenge will be for the company to incorporate that technology so that it doesn’t create public distrust of the software or lead to major public relations disasters.
“I’ve studied AI for decades and I feel this tremendous sense of responsibility with this powerful new tool,” Teevan said. “We have a responsibility to get it into people’s hands and to do so in the right way.”
