Organizations continue to make significant investments in Enterprise AI, bringing Agentic and Generative AI solutions into their operations and systems with the goal of improving those operations through automation and machine learning. Within the context of knowledge, data, and information management, we’ve seen organizations make meaningful strides with AI: implementing solutions that combine the collective knowledge assets of the organization to deliver actionable intelligence at the point of need, automatically identifying and proactively filling gaps in knowledge and information, and improving the quality and reliability of vast knowledge assets across the enterprise.
Though most organizations have thus far struggled to achieve true enterprise-level AI capabilities, many have realized varying degrees of success, with some moving beyond prototypes and scaling pilots into production. For these organizations that have begun to realize the value of AI in production, however, we’re now seeing several unintended consequences emerge. Not all of these consequences are negative or inherently risky, but they all bear consideration for organizations at any stage of an AI initiative, and all will require thoughtful planning and design to leverage or mitigate.

Resurfacing of Legacy Information
We know that most organizations are maintaining at least five times the information they should be (or need to be). Organizations are overrun with old, obsolete, duplicate, and near-duplicate information. This issue must be overcome in order to make AI work reliably, and it is also an issue AI can help to solve. Yet even in organizations that have invested heavily in AI-ready knowledge assets, we’re seeing legacy information resurface. In some cases, this has been a positive: old ideas or ways of working have resurfaced and created opportunities to revisit concepts for which the organization is now culturally or technologically ready. The more common case, however, is that information that should have been archived is resurfaced, causing confusion, disruption, and embarrassment for the organization. This can be particularly problematic for AI solutions that “white label” information, making it look fresh and new rather than offering the user the visual cues that come from finding an old document (like expired branding, old fonts, or, more simply, dates and authors). A more severe case is when sensitive information that should have been secured is ingested by AI and exposed to the wrong people.
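The screening step this paragraph implies—flagging stale and near-duplicate content before it is ingested by an AI solution—can be sketched with simple word-shingle similarity. This is a minimal illustration, not a production deduplication pipeline; the field names, similarity threshold, and staleness cutoff are all assumptions:

```python
from datetime import date

def shingles(text: str, k: int = 5) -> set:
    """Overlapping k-word shingles used for near-duplicate comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_for_review(docs, sim_threshold=0.8, stale_before=date(2020, 1, 1)):
    """Return ids of docs that are stale or near-duplicates of another doc.

    `docs` is a list of dicts with 'id', 'text', and 'modified' (date) keys,
    standing in for whatever metadata the real repository exposes.
    """
    flagged = set()
    for d in docs:
        if d["modified"] < stale_before:
            flagged.add(d["id"])
    sets = [(d["id"], shingles(d["text"])) for d in docs]
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            if jaccard(sets[i][1], sets[j][1]) >= sim_threshold:
                flagged.add(sets[j][0])  # keep the first copy, flag the later one
    return flagged
```

At enterprise scale, an exact pairwise comparison like this would be replaced by locality-sensitive hashing, but the governance decision—what to archive versus what to feed the AI—remains the same.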
Opacity of AI Platforms and Algorithms
One of the most compelling aspects of enterprise AI platforms is their use of organizational knowledge to train and tune their underlying models. When done well, this delivers real value—faster decisions, smarter solutions, and tools that better reflect how the organization actually operates. However, this benefit comes with a tradeoff that deserves its own risk category: embedded vendor AI models within enterprise platforms. These models are increasingly “always on,” continuously learning and deeply integrated into core systems. In the process, they may absorb sensitive information they were never meant to access (think trade secrets, proprietary IP, or regulated data like PII). The result is a loss of control over our most valuable assets, combined with limited visibility into how models are trained, where data flows, and how decisions are made. Over time, this opacity can lead to vendor lock-in, compliance gaps, and unintended data exposure similar to that described in the previous section.
Inadvertent Plagiarism
AI solutions are, by design, scraping an organization’s existing knowledge assets, ranging from structured data to unstructured content, pulling information from existing sources, and combining it into—ideally—Relevant, Organizationally Contextualized, Complete, and Knowledge-Centric (ROCK) answers. For many organizations, the complete corpus of indexed information includes a host of third-party, copyrighted material, as well as official documentation and records that also require citation. AI solutions are using all of this and delivering it, at times, without the appropriate references. When kept entirely within the organization’s firewall, this is a limited concern, but as materials move outside of the organization, it becomes a major issue that can expose the organization to undue risk and reputational damage. Even internally, and aside from the issue of plagiarism itself, losing sight of the source creates misdirection, as consumers of AI results can no longer judge the dependability of the information based on its author or origin.
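The safeguard this section argues for—carrying source metadata all the way through to the generated answer—can be sketched in a few lines. Every name here is illustrative rather than a specific product’s API; `generate` stands in for whatever retrieval-augmented generation stack is actually in use:

```python
def answer_with_citations(question, retrieved_passages, generate):
    """Compose an answer that always carries its sources.

    `retrieved_passages` is a list of dicts with 'text', 'source', and
    'author' keys; `generate` is any text-generation callable. Both are
    placeholders for the real retrieval and LLM components.
    """
    # Number the passages so the model can cite them inline.
    context = "\n\n".join(
        f"[{i + 1}] {p['text']}" for i, p in enumerate(retrieved_passages)
    )
    draft = generate(
        "Answer the question using only the numbered passages below, "
        f"citing passage numbers.\n\n{context}\n\nQuestion: {question}"
    )
    # Append a human-readable source list so provenance survives even if
    # the answer is copied out of the tool and shared externally.
    citations = "\n".join(
        f"[{i + 1}] {p['source']} ({p['author']})"
        for i, p in enumerate(retrieved_passages)
    )
    return f"{draft}\n\nSources:\n{citations}"
```

The design point is that attribution is enforced structurally, in the answer-composition step, rather than left to the model’s discretion.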
Decreasing Innovation and Creativity
Within the knowledge management field, one of the North Star goals has always been to ensure organizational knowledge is captured for others to leverage. This means harnessing lessons learned and expertise to ensure an organization is operating efficiently and effectively, powered by its past learnings and experiences. With the latest AI solutions, organizations are getting closer than ever to achieving that goal across the enterprise. We’re now hearing growing worries about the impact of that capability, with organizational leadership concerned about losing their innovative and creative edge as AI hands employees the “easy” answer rather than having them work to figure it out. This has become such a concern, in fact, that the pithy moniker of “AI Workslop” has entered the corporate vernacular. Put simply, organizations are worried their people are losing the ability to think creatively and critically. What happens when the organization successfully delivers guidance on the gold-standard way of doing a task to all its employees? A lot of productivity and consistency can be gained, but has the organization just factored out the creative process that could have revealed a new and better way of working?
Less ‘On The Job’ Development
One of the promised outcomes of AI is the automation of mundane work and improved efficiency for many tasks that have traditionally been done by new joiners and employees new to the workforce. This has been particularly noteworthy for services and consulting firms, with companies including Accenture, McKinsey, Amazon, Microsoft, and Salesforce announcing layoffs or slowed hiring, citing AI as a driving factor. These changes can have a massive impact on the bottom line, but they are also changing the very structure of these and other organizations, shifting from a “pyramid” organizational structure to more of a “column,” if you will, with fewer entry-level employees at the base. The initial savings are apparent, but how does this impact how people learn and grow within the organization? Some of the most successful use cases we have seen for AI have been in onboarding, delivering targeted learning at the point of need to new joiners and proactively giving guidance on how to correctly perform tasks. Historically, organizations have lauded the idea of “rising stars,” using entry-level positions as an opportunity to spot talent, invest in it, and craft opportunities for mentoring and on-the-job learning. What happens if AI succeeds in eliminating the people who would be the future leaders of the organization? Even short of full elimination, many employees’ early years are spent “learning how to learn”: figuring out tasks, building networks of experts from whom to learn, and even making small mistakes on internal initiatives that help them develop their own expertise. With the latest initiatives in Enterprise AI, organizations may be washing away the very experiences that develop their future talent.
New Silos
One of the core roles of KM has always been to break down organizational silos, identifying means for the organization to share knowledge with business context across systems, geographic areas, and functions. IT solutions have often been established to support that mission, but too often they end up doing the opposite. How many organizations have suffered from multiple intranets, portal solutions, or search solutions? Initially intended to break down silos, too many findability and discoverability tools instead reinforce them, creating situations where end users have to go from system to system to obtain a complete answer. The best AI solutions, truly enterprise-level, finally address that issue, combining all types, forms, and sources of knowledge assets with context to create a single point of interaction for end users. This drastically reduces the friction most employees experience when seeking a complete answer to a question. Unfortunately, what we’re seeing instead is many different AI solutions springing up within a single organization: some spawned by departmental efforts, some borne out of the native AI capabilities within existing system repositories, and some created by curious technologists wishing to “play with” the latest technologies. Though these individual solutions hold value and are “working” by any definition, they are in fact replicating the classic silos they should be eradicating.
Organizational Flattening
With AI leveraging plain language and negating the need for any advanced querying or operational skills, as well as its ability to generate “polished” reports pulling from multiple sources, executives and other senior leadership now have the long-promised ability to receive business intelligence faster, easier, and more independently. This has been a goal for years, with portals and metrics dashboards consistently failing to meet the need because they lacked the flexibility to answer varying core questions at scale. Instead, in many organizations, senior leadership obtains core business data via human-generated reports and presentations. Typically, an executive has a series of questions, so they request the information from their direct reports, who then go to others to request it, who then harvest it from a series of systems and sources, harmonize it, add their analysis, and deliver it up the chain, where it’s further reviewed and edited, amended with additional insights, and then delivered to the original requester. Entire careers have been spent mapping this cycle and seeking opportunities to improve its efficiency. It takes time, but having multiple hands involved introduces opportunities for expert knowledge and insights to be added (of course, it also runs the risk of introducing biases and incorrect assumptions). With AI, this entire process is vanishing. In the most mature AI-powered organizations, leadership can cut out these layers and, near-instantaneously, receive answers by asking questions in natural language. This vastly improves speed and efficiency, and in many cases accuracy and completeness, but the expert knowledge added by human authors is lost, meaning facts may be delivered to decision makers without the business context that ensures a full understanding.
As I conclude, I’ll reiterate that not all unintended consequences are negative. The sections above cover a number of opportunities that can be leveraged for even greater business value from Enterprise AI initiatives. There are, however, risks as well, and for those, the fields of knowledge management, organizational design, and semantics offer many of the solutions. Back in 2024, we introduced the concept of Knowledge Intelligence (KI) to cover these concepts. Since then, our work has repeatedly demonstrated how the thoughtful inclusion of KM practices, semantic design, and semantic layer frameworks can vastly improve the odds of adoption and success for AI, as well as the long-term value it delivers. If your organization is at any stage of its AI journey, connect with us now to ensure you get the results you’re seeking and realize the added benefits that are available.
