I'm working on this stuff in my FoC project and in my day job, more on the knowledge extraction and symbolic tool use than on having models work directly on the graphs. That seems a little... black-boxy for my purposes.

The basic idea is to give natural language to the LLM, have it output a symbolic representation of what it learns, and have it integrate that representation with what it has already learned elsewhere. Then a question-answering agent gets the ability to browse & query the resulting knowledge graph, using a combination of symbolic reasoning, graph analysis, and semantic similarity to extract the most valuable context for the question. Rough sketches of both halves are below.

So far we've built each of these pieces in isolation; now we're trying to find a decent way to combine them and see whether it improves performance on certain tasks.
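For the extraction-and-integration half, here's a minimal sketch of what I mean in Python. The `call_llm()` helper and the prompt are hypothetical stand-ins for whatever model API you use; `networkx` holds the graph:

```python
import json

import networkx as nx

EXTRACT_PROMPT = """Extract (subject, relation, object) triples from the text below.
Return only a JSON list like [["subject", "relation", "object"], ...].

Text:
{text}"""


def call_llm(prompt: str) -> str:
    """Stand-in for whatever model API you're using (hypothetical)."""
    raise NotImplementedError


def extract_triples(text: str) -> list[tuple[str, str, str]]:
    # Ask the model for a symbolic representation of the text.
    raw = call_llm(EXTRACT_PROMPT.format(text=text))
    return [tuple(t) for t in json.loads(raw)]


def integrate(graph: nx.MultiDiGraph, triples) -> None:
    # Crude canonicalization so "The LLM" and "the LLM" land on the
    # same node; real entity resolution would go here. Adding an edge
    # merges the new fact with whatever the graph already knows.
    for s, r, o in triples:
        graph.add_edge(s.strip().lower(), o.strip().lower(),
                       relation=r.strip().lower())
```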
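And for the question-answering half, a sketch of the hybrid retrieval. `embed()` is another hypothetical stand-in, this time for any sentence-embedding model; symbolic reasoning over the returned triples would sit on top of this:

```python
import math

import networkx as nx


def embed(text: str) -> list[float]:
    """Stand-in for a sentence-embedding model (hypothetical)."""
    raise NotImplementedError


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-9)


def retrieve_context(graph: nx.MultiDiGraph, question: str,
                     top_nodes: int = 5, hops: int = 1) -> list[str]:
    # Semantic similarity: find the graph nodes closest to the question.
    # (In practice you'd cache node embeddings rather than recompute.)
    q_vec = embed(question)
    seeds = sorted(graph.nodes,
                   key=lambda n: cosine(q_vec, embed(n)),
                   reverse=True)[:top_nodes]

    # Graph analysis: expand around the seeds to pull in related facts.
    keep, frontier = set(seeds), set(seeds)
    for _ in range(hops):
        frontier = ({m for n in frontier for m in graph.successors(n)} |
                    {m for n in frontier for m in graph.predecessors(n)})
        keep |= frontier

    # Serialize the subgraph as triples the QA agent can reason over.
    sub = graph.subgraph(keep)
    return [f"{s} --{d['relation']}--> {o}"
            for s, o, d in sub.edges(data=True)]
```

The reason for mixing the two signals: similarity alone misses facts that only connect to the question through an intermediate entity, while pure graph traversal has no notion of relevance.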