SOURCE: Kay Sever | February 25, 2026

After being on an AI learning curve for the past several months, I have decided it’s time to write about my observations, insights and possible ways to selectively apply AI within the optimization framework to create a better result. Therefore, our topic for 2026 will be the intersecting points of Optimization and AI as they relate to productive assets, the organization and the management system. We are told that AI will “know all”, so we expect it to be helpful no matter the question asked or problem to be solved. The more you learn about AI models (large language models, or LLMs) and their strengths and weaknesses, the clearer your vision will be about incorporating one into your workflow.
Last month I ended my article with this month’s topic: Are our expectations for how AI can be used to increase profit valid? Given what I have learned over the past 30 days, I am modifying our topic for March. Instead, we will focus on AI capabilities, rate of change, and limiting factors.
AI Capabilities
The most popular LLMs in use today are not created equal. They have strengths and weaknesses linked to their design and function, and those differences grow as designers build more capability into LLMs and discover more ways to link them to science, business and daily life.
- Some were designed to replace Wikipedia, dictionaries and encyclopedias.
- Some can analyze sentence structure and improve how paragraphs are written.
- Some can do calculations and mathematical analysis.
- Some can create videos from images.
- Some can verbally “converse” with a user when brainstorming.
- Some can organize information, calendars, etc. for a team of people.
- Some can collaborate with other LLMs to determine the “best” answer for a user, given the capabilities of the LLMs that are talking to each other.
- Some were designed to analyze specific “functional” datasets (e.g., medical, legal, financial).
Moving Targets
In the current “AI race”, getting a grasp on AI capabilities is like chasing moving targets! LLM capability is changing faster than any system development I have witnessed in my lifetime. During my career, I have personally designed and implemented many systems for large mining/processing operations… systems that tracked and reported mobile/underground equipment productivity, maintenance, haul truck tire life, budget/forecasting, training, continuous improvement initiatives, and optimum performance. I am very familiar with the process for changing existing systems and building new computer modules. Each module or system change involves understanding and documenting user requirements, then programming the changes, testing the changes, designing training materials and scheduling the training. All of this work takes weeks or months to complete.
Why am I bringing this up? Imagine a world where all car manufacturers introduced new features every week. Selecting the best car for you would be difficult… it would feel like you were chasing a moving target. AI development is currently like that. Right now it is IMPOSSIBLE to choose an LLM whose capabilities will remain static over a period of time (like a traditional software package that does not change for months or years). Strengths and weaknesses can shift from week to week. A management team or an individual user could select an LLM based on its strengths this week, but next week another LLM may be the best match for their needs.
Limiting Factors
- Data Access: AI cannot perform any function or work without data/information. This may sound too simplistic to state; however, the data/information accessible to AI determines the quality of output, what analyses can be performed, what problems AI can help solve, and whether the output from AI can be trusted.
An AI accuracy test: We have been told that all content from printed material (books, magazines, newspapers, world history) and from the internet (articles, images, movies, videos) has been loaded into AI models. To test this theory, I asked AI about myself.
- Over the last 25 years, I have written a large body of work… 15 years of monthly articles about management’s challenges with change, a book about optimization and culture change, and LinkedIn articles/posts about management topics since 2006. I have also spoken at over 25 industry conferences.
- In December I asked one of the highly ranked LLMs who I was. It returned one sentence that said I was an optimization consultant and coach in the mining business. There was no mention of books, articles or speaking engagements.
- Then I asked AI if I had written a book. It said “NO”.
- I typed that I had written a book and gave the title and year of publication and hit return. AI sat “Thinking” for a few seconds, and then it said I did write a book and repeated the title and date of publication. Then it typed “Thanks for the help!”
Why did I share this with you? I discovered that AI does not have access to all the materials I thought it did. It only knows what has been shared with it, which means you can ask it a question and believe you are getting a correct answer because you think it has access to the world’s body of knowledge. You will not know if the answer it gives you is true, complete, false or misleading. I am not telling you to avoid using AI, but you may want to understand more about an LLM’s access to information before assuming that every output it gives is accurate. You might want to ask more than one LLM the same question and compare the answers before deciding which LLM to rely on.
- Power Sources: You may be aware that there are huge issues with the power required to run AI data centers. The estimates for power requirements are updated frequently, and the numbers always go up, not down. Many communities have voted down AI data center construction because power costs for residents will go sky high if a data center uses most of their power. This bottleneck in power sourcing is causing huge delays in data center construction. Tech companies are exploring alternative power sources to address the problem.
- Chip Availability: Because AI capabilities are changing so fast, the chips required for processing are becoming obsolete very quickly. This dynamic makes it difficult for chip manufacturers to meet the continually changing specs for chips.
This is not a complete list of limiting factors. As things change and unfold over the next few months, I may address others.
In summary, LLM capabilities are constantly changing, which makes it difficult to select one to incorporate into workflows. What we believe every AI “knows” or has access to may not be correct, which makes it difficult to be certain that AI output is true. A lack of power sources is slowing the rapid expansion of data centers across the US. The specifications for chips are changing rapidly as LLM capabilities increase, and chip manufacturers are having difficulty keeping up with the speed of change.
Next month: Exploring Problem-Solving with AI
Kay Sever is an Expert on Achieving “Best Possible” Results. Kay helps executive and management teams tap their hidden profit potential and reach their optimization goals. Kay has developed a LIVESTREAM management training/coaching system for Optimization Management called MiningOpportunity – NO TRAVEL REQUIRED. See MiningOpportunity.com for her contact information and training information.

