3 AI Copilot Rules: Be Selective, Train Users, Check Results
In the Enterprise Connect session “Making the Most of Your AI Assistant,” panelists shared the best practices that have proven most productive.

There are three simple guidelines for getting the most out of AI assistants – and all three rely on a high level of human care and attention. In the Enterprise Connect session “Making the Most of Your AI Assistant,” co-presenters Kevin Kieller, co-founder and lead analyst at EnableUC, and Omdia analyst Brent Kelly stressed repeatedly that an AI assistant will only be as good as the human judgment exercised in selecting the right assistant for the organization, the human training in getting the most out of the tool, and the human review of AI results. After all, as Kieller said, “AI is really good at finding both the right information and the wrong information.”
Rule #1: Determine Which AI Assistant Works Best for You
A table of AI assistant providers published in this study from AI Today provided a framework for the co-presenters’ point that it’s necessary to compare providers – not to see which is best overall, but which is best for an individual company. Each company has different needs, and some providers may be better equipped to handle those needs than others.
Cost is another consideration. Providers like Zoom, Cisco Webex, and Google don’t charge beyond their standard licenses for AI assistants, while Slack charges an additional $10 and Copilot an additional $30. It’s also important to remember that these prices could change in the future.
Rule #2: Double-Check What Your AI Assistant is Saying
After you’ve decided which AI assistant you’re going with, it’s important to remember that AI is still evolving. Double-checking the information AI is giving you is still a crucial part of being an AI user.
Kieller and Kelly provided a few examples of tests they ran with AI assistants. First, they asked Copilot, Google Gemini, and Zoom AI Companion how many ‘r’s are in the word strawberry. The answer: each AI assistant was certain there were only two ‘r’s.
The next day, they asked Copilot again and the AI got it correct, saying there were three ‘r’s. However, after pushing further and asking Copilot if it was sure, it corrected itself to say there were two ‘r’s. Even after telling Copilot that the correct answer was three, Copilot doubled down, saying it was very sure there were only two.
This is an example where it’s easy to see how AI assistants can provide users with the wrong information. We know there are three ‘r’s in strawberry, but the AI assistant was still giving the wrong answer – even after being corrected.
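For a claim this simple, verification takes a single line of code. A minimal Python check (not something shown in the session, just an illustration of how cheap verification can be) confirms the count:

```python
def letter_count(word, letter):
    """Count how many times a letter appears in a word, ignoring case."""
    return word.lower().count(letter.lower())

print(letter_count("strawberry", "r"))  # prints 3
```

The harder lesson, as the presenters note, is that most real questions put to an AI assistant have no equally trivial check.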
Another example was a test in which the presenters asked Copilot to create a math problem that other AI assistants wouldn’t be able to solve, then asked Copilot and Google Gemini to solve it. Both AI assistants showed their work, writing out the steps they took to reach their answers. Copilot and Gemini gave different answers, but neither was correct – because the math problem had no solution in the first place. However, DeepSeek was able to produce the correct answer.
These examples show that regardless of the AI assistant, AI is not always correct. The errors in the presentation’s examples are easy, even amusing, to spot, but real-world cases could be far harder to catch and far more damaging. This is why it’s important for knowledge workers to verify that AI provides accurate information. The examples also show that AI is probabilistic: even with the same context supplied via prompt engineering, it can give different answers on different days.
As Kelly said, “This is the challenge when you use AI and don’t verify the work. It’s easy when you can tell how many ‘r’s there are. It’s a lot harder when we’re giving it tougher questions.”
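The probabilistic behavior described above comes from how LLMs pick their next word: they sample from a probability distribution rather than always choosing the single most likely option. The toy sketch below illustrates this; the distribution, token names, and function are invented for illustration and are not how any of the named products actually work internally:

```python
import random

def sample_token(probs, temperature=1.0, rng=None):
    """Pick one token from a probability distribution.

    With temperature > 0 the choice is stochastic, so repeated calls can
    return different tokens for identical input -- a toy illustration of
    why an assistant can answer differently on different days.
    """
    rng = rng or random
    if temperature == 0:  # greedy decoding: always the most likely token
        return max(probs, key=probs.get)
    # reweight probabilities by temperature, then sample
    scaled = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = rng.random() * sum(scaled.values())
    cum = 0.0
    for token, weight in scaled.items():
        cum += weight
        if r <= cum:
            return token
    return token  # floating-point edge case: return the last token

# hypothetical next-token distribution -- numbers invented for illustration
probs = {"two": 0.55, "three": 0.40, "four": 0.05}

greedy = [sample_token(probs, temperature=0) for _ in range(5)]
sampled = [sample_token(probs, temperature=1.0, rng=random.Random(i))
           for i in range(20)]
print(greedy)        # greedy decoding is identical every time
print(set(sampled))  # sampling usually yields more than one distinct answer
```

The same mechanism that lets an assistant give a wrong answer one day and the right one the next is what the presenters mean by “probabilistic.”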
Rule #3: AI Use Is Only As Good as User Training
If AI is probabilistic and frequently incorrect, what can users do to make sure they are using it responsibly? According to the co-presenters, the simplest – and seemingly most overlooked – answer is training.
“Training on AI tools really should be mandatory so that your users understand what they could do, what they should do, and what they can do,” said Kieller.
Multiple studies show that most people are not trained on AI, even when their company purchases a license to use it. Pew Research released a study last October finding that among the 51% of workers who got extra training in 2024, only 24% said it was related to AI use. General Assembly surveyed 339 VPs and directors in the US and UK, and 58% said they had never been trained on AI. Ivanti reported that 81% of office workers haven’t been trained on AI and 15% are using unsanctioned tools.
Kieller noted that AI makes mistakes and that people tend to ignore this issue, so it’s important for companies to make sure their workers know how to use AI responsibly – for example, not copying and pasting sensitive information into AI tools.
Workers and companies also need to understand Retrieval-Augmented Generation (RAG), a technique in which a generative AI model retrieves relevant text from databases, documents, and web sources and supplies it to the large language model (LLM) as context when generating an answer, rather than relying on the model’s training data alone.
Kelly said, “You’re going to have to have someone who owns the process. When documents get updates, somebody has to own the process to regenerate the RAG database.”
Knowing how AI pulls in data is essential for workers to understand where and how it gets its information so the company’s AI assistant can be up-to-date and accurate.
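As a rough illustration of the retrieval step described above, here is a minimal sketch of a RAG-style flow; the documents, function names, and keyword-overlap scoring are invented stand-ins for the vector-similarity search a production system would use:

```python
import string

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query -- a toy stand-in
    for the vector-similarity search a real RAG system performs."""
    def words(text):
        return {w.strip(string.punctuation) for w in text.lower().split()}
    q = words(query)
    ranked = sorted(documents, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Prepend the retrieved passages so the model answers from current
    company data instead of only its training data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# hypothetical internal documents -- in practice, this is the RAG
# database someone must own and regenerate when source files change
docs = [
    "The travel policy was updated in March: economy class only.",
    "Quarterly revenue figures are stored on the finance share.",
    "Office hours are 9 to 5 on weekdays.",
]

print(build_prompt("What is the current travel policy?", docs))
```

If the March policy document in this sketch is never refreshed after the next update, the assistant keeps citing stale text – which is exactly why Kelly insists someone must own the regeneration process.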
Where AI Assistants Are Going Next
At the end of the session, Kieller and Kelly went over some of their predictions for the future of AI. Their two big predictions: by the time we get to Enterprise Connect 2026, AI Agents are going to be demoed rather than just storyboarded, and that contact centers will start rolling out AI Agents to knowledge workers. We’ll check back next year to see whether these forecasts come true.