In an age where organizations are seeking competitive advantages from new technologies, having high-quality knowledge readily available for use by both humans and artificial intelligence (AI) solutions is an imperative. Organizations are making large investments in deploying AI. However, many are turning to knowledge and data management principles for support because their initial AI implementations have not produced the ROI or the impact that they expected.
Indeed, effective AI solutions, much like other technologies, require quality inputs. AI needs data embedded with rich context derived from an organization’s institutional knowledge. Institutional knowledge is the collection of experiences, skills, and knowledge resources available to an organization. It includes the insights, best practices, know-how, know-why, and know-who that enable teams to perform. It resides not only in documentation, but also in processes and in people’s heads. Extracting this institutional knowledge and injecting it into the data and content fed to technology systems is key to achieving Knowledge Intelligence (KI). One of the biggest gaps we have observed is that this rich contextual knowledge is missing or inaccessible, and when it is, AI deployments struggle to live up to their promises.
Vast Deposits of Knowledge, but Limited Capabilities to Extract and Apply It
A while back we had the opportunity to work with a storied research institution. This institution has been around for over a century, working on cutting-edge research in multiple fields. They boast a monumental library with thousands (if not millions) of carefully produced and peer-reviewed manuscripts spanning their entire history. However, when they tried to use AI to answer questions about their past experience, AI was unable to deliver the value that the organization and its researchers expected.
As we performed our discovery, we noticed a couple of things working against our client. First, while they had a tremendous amount of content in their library, it was not optimized for use as an input to AI or other advanced technologies. It lacked a significant amount of institutional knowledge, as evidenced by the absence of rich metadata and of the consistent structure that allows AI and large language models (LLMs) to produce optimal answers. Second, not all the answers people sought from AI were captured in the final manuscripts that made it to the library. A significant amount of institutional knowledge remained confined to the research team and inaccessible to AI in the first place: failures and lessons learned, relationships with external entities, project roles and responsibilities, know-why, and other critical knowledge were never deliberately captured.
Achieving Knowledge Intelligence (KI) to Improve AI Performance
As EK’s CEO wrote, there are three main practices that advance Knowledge Intelligence, and they can be applied by organizations facing similar challenges in rolling out their AI solutions:
- Expert Knowledge Capture & Transfer. This refers to encoding expert knowledge and business context in an organization’s knowledge assets and tools, identifying high-value moments of knowledge creation and transfer, and establishing procedures to capture the key information needed to answer the questions users bring to AI. For our client in the previous example, this translated to standardizing approaches to project start-up and project closeout to make sure that knowledge was intentionally handed over and made available to the rest of the organization and its supporting systems.
- Real-World Application: At an international development bank, EK captured expert knowledge and embedded it into a knowledge graph and related repositories, enabling a chatbot to deliver accurate, context-rich institutional knowledge to its stakeholders.
- Business Context Embedding. Taking the previous practice one step further, this ensures that business context is embedded into content and other knowledge assets through consistent, structured metadata. This includes representing business, technical, and operational context so that it is understandable by AI and human users alike, and it relies on taxonomies to describe that context consistently. In the case of our client above, this included capturing information about the duration and cost of their research projects, the people involved, clients and providers, and the different methodologies and techniques employed as part of the project (a brief metadata sketch follows this list).
- Real-World Application: At a global investment firm, we built a custom generative AI solution to develop a taxonomy for describing and classifying risks, enabling data-driven decision-making. The use of generative AI not only reduced the level of effort required to classify the risks, which had previously taken experts many hours of reading and interpreting the source content, but also increased the consistency of the classifications (a classification sketch follows this list).
- Knowledge Extraction. This ensures that AI and other solutions have access to rich knowledge resources through connection and aggregation. A semantic layer is an ideal mechanism for making knowledge from across the organization readily available to AI systems (a small knowledge graph sketch follows this list).
- Real-World Application: We recently assisted a large pharmaceutical company in extracting critical knowledge from thousands of its research documents so that researchers, compliance teams, and advanced semantic and AI tools could better ‘understand’ the company’s research activities, experiments and methods, and its products.
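To make Business Context Embedding more concrete, here is a minimal sketch of taxonomy-controlled metadata. The taxonomy terms, field names, and `ManuscriptMetadata` class are hypothetical stand-ins rather than our client’s actual model; the point is that metadata tied to a controlled vocabulary is what makes business context consistent and machine-readable.

```python
# Illustrative only: hypothetical taxonomy terms and document schema.
from dataclasses import dataclass, field

# Small controlled vocabularies standing in for an enterprise taxonomy.
METHODOLOGY_TAXONOMY = {"Randomized Trial", "Case Study", "Literature Review"}
ROLE_TAXONOMY = {"Principal Investigator", "Research Assistant", "External Partner"}

@dataclass
class ManuscriptMetadata:
    title: str
    duration_months: int
    cost_usd: float
    methodologies: list = field(default_factory=list)
    team: dict = field(default_factory=dict)  # person -> role

    def validate(self) -> list:
        """Flag metadata values that fall outside the controlled vocabularies."""
        issues = []
        for m in self.methodologies:
            if m not in METHODOLOGY_TAXONOMY:
                issues.append(f"Unrecognized methodology: {m}")
        for person, role in self.team.items():
            if role not in ROLE_TAXONOMY:
                issues.append(f"Unrecognized role for {person}: {role}")
        return issues

record = ManuscriptMetadata(
    title="Soil Carbon Retention Study",
    duration_months=18,
    cost_usd=250_000,
    methodologies=["Randomized Trial"],
    team={"A. Researcher": "Principal Investigator"},
)
print(record.validate())  # [] -> metadata is consistent with the taxonomy
```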
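For the risk-classification example, the general pattern looks something like the sketch below. The `RISK_TAXONOMY` terms, prompt wording, and hardcoded model response are illustrative assumptions rather than the firm’s actual solution; in practice the response would come from a generative AI service, with the taxonomy constraining its output.

```python
# Illustrative only: hypothetical taxonomy and prompt; the model response is hardcoded.
RISK_TAXONOMY = ["Credit Risk", "Market Risk", "Operational Risk", "Liquidity Risk"]

def build_classification_prompt(document_text: str) -> str:
    """Constrain the model to the controlled vocabulary so outputs stay consistent."""
    terms = ", ".join(RISK_TAXONOMY)
    return (
        "Classify the following risk description using only these categories: "
        f"{terms}.\nRespond with a comma-separated list of matching categories.\n\n"
        f"Description:\n{document_text}"
    )

def parse_response(raw: str) -> list:
    """Keep only labels that exist in the taxonomy, discarding anything else."""
    labels = [label.strip() for label in raw.split(",")]
    return [label for label in labels if label in RISK_TAXONOMY]

prompt = build_classification_prompt(
    "A counterparty failed to settle a trade, and manual reconciliation was delayed."
)
# In practice `raw` would be returned by a generative AI API call using `prompt`.
raw = "Credit Risk, Operational Risk"
print(parse_response(raw))  # ['Credit Risk', 'Operational Risk']
```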
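And for Knowledge Extraction, a small knowledge graph sketch (using the open-source rdflib library) illustrates how extracted facts can be aggregated so that both people and AI tools can query them through a semantic layer. The namespace, properties, and project data are hypothetical, not the pharmaceutical company’s actual model.

```python
# Illustrative only: hypothetical namespace, properties, and research projects.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/research/")
g = Graph()
g.bind("ex", EX)

# Facts extracted from (hypothetical) research documents.
g.add((EX.Project123, RDF.type, EX.ResearchProject))
g.add((EX.Project123, EX.usesMethod, EX.MassSpectrometry))
g.add((EX.Project123, EX.studiesProduct, EX.CompoundX))
g.add((EX.Project456, RDF.type, EX.ResearchProject))
g.add((EX.Project456, EX.usesMethod, EX.MassSpectrometry))

# A question a researcher or an AI agent might ask of the semantic layer:
# "Which projects used mass spectrometry?"
query = """
    SELECT ?project WHERE {
        ?project a ex:ResearchProject ;
                 ex:usesMethod ex:MassSpectrometry .
    }
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.project)
```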
It is important to note that these three practices also need to be grounded in clearly defined and prioritized use cases. The knowledge that is captured, embedded, and extracted by AI systems should be determined by actual business needs and aligned with business objectives. It may sound obvious, but in our experience, teams within organizations often capture knowledge that only serves their immediate needs, or knowledge that they merely assume others need.
Closing
Organizations are increasingly turning to AI to gain advantages over their competitors and unlock previously inaccessible capabilities. To realize those advantages, organizations need to make their institutional knowledge available to human and machine users alike.
Enterprise Knowledge’s multidisciplinary team of experts helps clients across the globe maximize the effectiveness of their AI deployments through optimizing the data, content, and other knowledge resources at their disposal. If your organization needs assistance in these areas, you can reach us at info@enterprise-knowledge.com.
Institutional knowledge is the sum of experiences, skills, and knowledge resources available to an organization’s employees. It includes the insights, best practices, know-how, know-why, and know-who that enable teams to perform. This knowledge is the lifeblood of work happening in modern organizations. However, not all organizations are capable of preserving, maintaining, and mobilizing their institutional knowledge—much to their detriment. This blog is one in a series of articles exploring the costs of lost institutional knowledge and different approaches to overcoming challenges faced by organizations in being able to mobilize their knowledge resources.