I came across your thread just after reading https://writings.stephenwolfram.com/2023/06/prompts-for-work-play-launching-the-wolfram-prompt-repository/, so recency bias yada yada, but this does strike me as a situation where an LLM could help, particularly with learnability.
The prompts I saw in the article that make me think so are:
• Anonymize — replaces parts of the text that look like identifiers with placeholders
• CSV — causes the LLM to structure its answer as CSV
They lead me to believe that a prompt could reformat its input to match your structure(s?). So, no change when it matches, but a humanlike reinterpretation otherwise.
On one end of the interface spectrum, you might allow free text. On the other, you invoke it only when there's a parse error, to suggest an interpretation that does match your desired structure. Either way, showing the parse would teach the language and potentially make it faster to write.
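To make the parse-error end of that spectrum concrete, here's a rough sketch. Everything in it is hypothetical: the `key: value` format, and especially `llm_reformat`, which stands in for a real LLM call (a prompt like "reformat this to match <grammar>") with a trivial heuristic so the example runs on its own:

```python
import re

def parse_entry(text):
    """Strict parser for a made-up 'key: value' line format."""
    m = re.fullmatch(r"(\w+):\s*(.+)", text.strip())
    if m is None:
        raise ValueError(f"parse error: {text!r}")
    return m.group(1), m.group(2)

def llm_reformat(text):
    """Stand-in for the LLM call. A real version would send the input
    plus the target grammar to a model; this heuristic just fakes the
    humanlike reinterpretation for demonstration."""
    m = re.search(r"(\w+)\s+is\s+(\w+)", text)
    if m:
        return f"{m.group(1)}: {m.group(2)}"
    return text

def parse_with_fallback(text):
    """No change when the input already matches; otherwise ask for an
    interpretation, show the parse, and try again."""
    try:
        return parse_entry(text)
    except ValueError:
        suggestion = llm_reformat(text)
        parsed = parse_entry(suggestion)  # may still fail; surface that
        print(f"interpreted {text!r} as {suggestion!r} -> {parsed}")
        return parsed

parse_with_fallback("priority: high")   # matches: passed through untouched
parse_with_fallback("priority is high") # reinterpreted, parse shown
```

The `print` is the teaching step: the user sees how their free text was mapped into the structure, which is what would make the language learnable.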
Not saying it wouldn’t take a ton of fiddling, since I don’t know. Just throwing it out there. :)