AI Touchpoints – Corporate Risk, “A Chicken or the Egg” Dilemma

SOURCE: Kay Sever | April 29, 2026

 

After being on an AI learning curve for the past several months, I have decided it’s time to write about my observations, insights and possible ways to selectively apply AI within the optimization framework to create a better result. Therefore, our topic for 2026 will be the intersecting points of Optimization and AI as they relate to productive assets, the organization and the management system. 

Management’s greatest challenge at present may be critically examining a new technology that changes nearly every day, then deciding if, when and where to integrate it into existing workflows and practices (operational, organizational, executive/management decision making) so the end result is better than before (higher production, lower cost, more profit). In this article, last month’s key points are summarized and areas of risk are identified so executives can use it as a resource tool. 

AI INTEGRATION: Employees, Data Sharing (yours) and Data Sources (AI’s). These elements of AI Adoption/Integration are intertwined. I have included some questions management needs to ask, consider and agree upon before moving forward with an AI implementation.   

  • Employees: There are two issues to consider as you move forward with AI.       
    • What do you want AI to do to enhance current employee productivity/capabilities? The answer to this question will be job-function specific and somewhat depends on the sophistication of your current processes/information systems in place. Which employees could most benefit from AI integration? Thousands of work processes exist in many companies, so it may be best to take an AI integration approach with defined phases.         
    • RISK: Employees must know how to “ask” AI for information or analyses to receive responses that can be used for decision-making. Employees must be trained to write AI queries that yield comprehensive responses. This is a critical point because the quality of an answer is directly dependent on the specificity and inclusiveness of the question.   
  • Data Sharing: There are three areas of risk to be aware of when linking existing systems to AI.  
    • RISK: What data are you comfortable sharing with AI? Currently your internal data/IP is shared externally only to meet financial and regulatory reporting requirements. If you incorporate AI into work processes, you must grant AI access to your existing files (operational, organizational, financial, etc.). As of this writing, once AI gains access to your data files, they are no longer confidential and can be shared externally by AI. AI companies are exploring a fix for this problem, but no solution yet exists.  
    • RISK: As of this date, if you ask AI to refine or summarize your company reports, OR to refine an article or report your people have written, AI now owns that material; neither you nor your company owns it any longer. Further, AI agents can share that information with other AI agents, or with anyone, without your permission or knowledge.
    • RISK: Some IT systems are harder to link to AI than others. As of this writing, if you have older IT systems in place, linking them to AI may be difficult or impossible because of the system structure and/or the programming languages used to write them. Companies with older IT systems are discovering this as they attempt the link.         
  • Data Sources: There are two issues to be aware of when using AI to improve employee productivity or information for decision-making.   
    • RISK: The content/context of AI responses comes from “knowledge” stored in data centers, but that information is not as broad as you might think! It includes content from the “most popular” websites, articles from only SOME professional journals and industry newsletters (many publishers prohibit open sharing of their content), and SOME books. (See last month’s article for a detailed list by source type). As a result, AI may give responses that omit critical information (latest technologies, discoveries or historical information). This gap in AI “knowledge” will improve over time.
    • RISK: If you want to replace job functions performed by employees with AI “agents” that do analyses, there are important things to know before you begin eliminating people. You cannot assume that AI will access the same sources of data or information that a trained employee would use to perform a task. Key information could be left out of an analysis without your knowledge. You will not know what sources were used by AI. 

IMPORTANT NOTE: See the last half of April’s article for four questions I asked AI about accuracy. AI’s answers expose our inability to confirm the accuracy of AI responses.            

SUMMARY: I believe we are in Phase One of a massive AI project that we have little control over. As of this writing, this is how I see it:

In Phase One the public is “getting their feet wet” by asking AI questions to see what kind of responses they get. Like searches on the internet, AI search topics include legal issues, science/technology, health, money/investments, etc.  

While we are busy learning how to use AI for searches, simultaneously in the background AI data engineers are busy writing code that executes commands that go beyond simple searches. This code performs actual work/tasks. These engineers are discovering things daily as they play with the technology and find out what it can and cannot do. In some cases, they are discovering that AI can program itself. 

In my opinion, the people developing the AI LLMs are downplaying AI’s limitations and magnifying/exaggerating its strengths. For example, we learned from AI itself that incomplete or incorrect data can appear in responses without our knowledge. Now we know that accuracy and depth of knowledge need to improve before AI can be widely used with confidence in the workplace. This kind of information can help us ask different questions before we make a decision about AI integration. This past week I learned that linking AI to corporate IT systems is much harder if those systems are older. This limitation could dramatically impact timelines for AI integration, and those timelines will vary by company… important information for managing expectations.   

In short, this constantly changing environment makes it extremely difficult for executives to read the landscape and decide when to bring AI in, what it will really be able to do for a company (short-term and long-term), and when to reduce manpower if a reduction in force is part of the company’s plan.  

A “Chicken or the Egg” Dilemma  

We have all heard the saying “Which came first – the chicken or the egg?” I see this as a “chicken or the egg” moment for executives in every established company. I am suggesting this analogy because of what I have learned about AI capability in the past six months.

Executives are feeling pressure to integrate AI into their work processes ASAP to improve productivity and reduce labor costs. The problem here is the sequence, not the objective. AI technology needs to improve, and AI integration timelines will be unique to each company. The pressure to reduce manpower BEFORE processes/systems are modified and tested is the problem. To say it another way, executives are being pressured to trade the “promises of AI” for the “known years of training and experience” they will forfeit in a layoff. This is especially important in the mining industry, where the industry’s IP is held by a relatively small population of experienced employees. Should the “trade” (the chicken) come before the benefits are proven (the egg)?      

Next month’s topic will depend on what I learn about AI in the next 30 days.

Kay Sever is an Expert on Achieving “Best Possible” Results. Kay helps executive and management teams tap their hidden profit potential and reach their optimization goals. Kay has developed a LIVESTREAM management training/coaching system for Optimization Management called MiningOpportunity – NO TRAVEL REQUIRED. See MiningOpportunity.com for her contact information and training information.



Kay has worked side by side with corporate and production sites in a management/leadership/consulting role for 35+ years. She helps management teams improve performance, profit, culture and change, but does it in a way that connects people and the corporate culture to their hidden potential. Kay helps companies move “beyond improvement” to a state of “sustained optimization”. With her guidance and the MiningOpportunity system, management teams can measure the losses caused by weaknesses in their current culture, shift to a Loss Reduction Culture to reduce the losses, and “manage” the gains from the new culture as a second income stream.