Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework designed to enhance the performance of large language models (LLMs) in knowledge-intensive tasks by integrating sparse external knowledge graphs (KGs) with the LLM's internal knowledge.
The main insight of GIVE is that even when working with incomplete or limited KGs, it's possible to improve the reasoning capabilities of LLMs by using the structure of the KG to inspire the model to infer and extrapolate potential relationships between concepts. This approach facilitates a more logical, step-by-step reasoning process akin to expert problem-solving, rather than relying solely on direct fact retrieval from dense knowledge bases.
The GIVE framework operates in several key steps. First, it prompts the LLM to decompose the query into crucial concepts and attributes, extracting key entities and relations relevant to the question. It then constructs entity groups by retrieving entities from the KG that are semantically similar to these key concepts. Within these groups, GIVE induces intra-group connections using the LLM's internal knowledge to explore relationships among similar entities. For inter-group reasoning, it identifies potential relationships between entities across different groups by considering both the relations mentioned in the query and those present in the KG.
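The grouping step described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' code: the function names are invented, and a simple token-overlap score stands in for the embedding-based semantic similarity a real implementation would use.

```python
# Sketch of GIVE's entity-grouping step (all names are illustrative).
# For each key concept extracted from the query, retrieve KG entities
# that are semantically similar; here similarity is a toy token-overlap
# score standing in for real embedding similarity.

def similarity(a: str, b: str) -> float:
    """Toy stand-in for embedding similarity: Jaccard overlap of tokens."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def build_entity_groups(key_concepts, kg_entities, threshold=0.3):
    """Group KG entities around each query concept."""
    groups = {}
    for concept in key_concepts:
        groups[concept] = [e for e in kg_entities
                           if similarity(concept, e) >= threshold]
    return groups

# Example: a tiny biomedical-style KG vocabulary (made up for the demo).
kg_entities = ["aspirin_drug", "headache_symptom", "pain_symptom", "ibuprofen_drug"]
key_concepts = ["aspirin", "headache"]
print(build_entity_groups(key_concepts, kg_entities))
# → {'aspirin': ['aspirin_drug'], 'headache': ['headache_symptom']}
```

Each resulting group then becomes the unit over which intra-group and inter-group relations are explored.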
Additionally, GIVE introduces intermediate node groups to facilitate multi-hop reasoning necessary for complex questions, effectively bridging gaps in sparse KGs. By prompting the LLM to assess and reason about these possible relationships—including counterfactual reasoning where the model considers both the presence and absence of certain relations—GIVE builds an augmented reasoning chain. This chain combines factual knowledge from the KG with extrapolated inferences from the LLM, enabling the generation of more accurate and faithful responses even when the available external knowledge is limited.
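The counterfactual probing and chain-building idea can be sketched as follows. Again, this is a hedged assumption of how such a loop might look, not the paper's implementation: the prompts, the `toy_llm` stub, and the yes/no judging are all invented for illustration.

```python
# Sketch of GIVE-style counterfactual relation probing (illustrative only).
# For each candidate inter-group edge, the LLM is queried about both the
# affirmed and the negated relation; KG facts are kept verbatim, while
# LLM-derived edges are tagged as extrapolated in the reasoning chain.

def probe_relation(llm, head, relation, tail):
    """Ask the LLM about a candidate edge and its counterfactual negation."""
    affirm = llm(f"Is it true that {head} {relation} {tail}? Answer yes/no.")
    negate = llm(f"Is it true that {head} does NOT {relation} {tail}? Answer yes/no.")
    return affirm.strip().lower() == "yes" and negate.strip().lower() != "yes"

def build_reasoning_chain(llm, kg_facts, candidate_edges):
    """Combine verified KG facts with LLM-extrapolated edges."""
    chain = [("KG", h, r, t) for (h, r, t) in kg_facts]
    for (h, r, t) in candidate_edges:
        if (h, r, t) not in kg_facts and probe_relation(llm, h, r, t):
            chain.append(("extrapolated", h, r, t))
    return chain

# Toy LLM stub so the sketch runs without any model access.
def toy_llm(prompt):
    return "no" if "NOT" in prompt else "yes"

facts = [("aspirin", "treats", "pain")]
candidates = [("aspirin", "treats", "headache")]
print(build_reasoning_chain(toy_llm, facts, candidates))
# → [('KG', 'aspirin', 'treats', 'pain'), ('extrapolated', 'aspirin', 'treats', 'headache')]
```

Tagging each edge with its provenance is what lets the final answer stay faithful: the model can weight verbatim KG facts above its own extrapolations.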
Great insights by @UCBerkeley and @penn
00:00 Integrate LLM and Knowledge Graphs
01:06 Think on Graph (ToG)
05:58 ToG GitHub code repo
06:30 GIVE Graph Inspired Veracity Extrapolation
09:33 GIVE vs Harvard Knowledge Graph Agent
10:33 Why RAG fails in Knowledge Graphs
11:40 Example of GIVE in detail
16:16 Compare ToG to GIVE
All rights w/ authors:
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
https://arxiv.org/pdf/2307.07697
GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation
https://arxiv.org/pdf/2410.08475v1
Towards Trustworthy Knowledge Graph Reasoning: An Uncertainty-Aware Perspective
https://arxiv.org/pdf/2410.08985
#airesearch
#aiagents
#harvarduniversity
#berkeley
#knowledge
#llm