# thinking-together
j
Another thing I'd be interested in is solutions for bidirectional text transformations (for source code). Ideally a system in which you could define a transformation once and get both A-to-B and B-to-A transformers.
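A minimal sketch of the "define once, get both directions" idea, assuming nothing more than a shared rule table (the rules and names are made up for illustration; real source-to-source translation would need grammar-level rules rather than string rewriting):

```python
# One rule table, read in both directions.
RULES = [
    # (language A spelling, language B spelling)
    ("fn ", "def "),
    ("&&", "and"),
    ("||", "or"),
]

def a_to_b(text: str) -> str:
    for a, b in RULES:
        text = text.replace(a, b)
    return text

def b_to_a(text: str) -> str:
    # The same table, applied in the other direction.
    for a, b in RULES:
        text = text.replace(b, a)
    return text

# Round-trips for these rules, though naive string rewriting is not
# invertible in general (e.g. "and" inside an identifier or string literal).
assert b_to_a(a_to_b("fn f(x) { x && y }")) == "fn f(x) { x && y }"
```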
w
For me the question is one of "continuation" (analogous to "analytic continuation"). For source code transformations, I imagine robust automatic error handling stemming from the fact that most transformations are somewhat compositional; I mean the action on the whole text is roughly a combination of the actions on its parts:
`f(text) = f(text.some_part) ** f(text.some_other_part)`
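A toy sketch of that shape, and of the error handling it buys you: split the text, transform each part, and fall back to the untouched part when a piece fails. Splitting on blank lines and taking `**` to be string joining are arbitrary choices here:

```python
# f(text) = f(part) ** f(other_part), with "**" read as joining.
def transform(text: str, f) -> str:
    out = []
    for part in text.split("\n\n"):  # text.some_part, text.some_other_part, ...
        try:
            out.append(f(part))
        except Exception:
            out.append(part)         # "continuation": leave the failed part as-is
    return "\n\n".join(out)          # the "**" combination
```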
Insofar as LLMs "think" and "are creative", I'd say it comes from the attempt to find the "conceptual space" defined by the training text. Here the core transformation is from one chunk of text to the next chunk, plus some attention data structure that gets updated along the way.
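That loop, reduced to a caricature (`next_chunk` is a stand-in for the model, and the dict stands in for the attention state; none of this is a real API):

```python
# chunk_in -> (chunk_out, updated_state), applied repeatedly.
def generate(prompt: str, next_chunk, steps: int = 5) -> str:
    state = {}                 # stand-in for the attention data structure
    chunks = [prompt]
    for _ in range(steps):
        chunk, state = next_chunk(chunks[-1], state)
        chunks.append(chunk)
    return "".join(chunks)
```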
j
Yeah, I guess a system that could transform very different languages back and forth would probably have to rely on machine learning, and it might never be 100% accurate. But I was wondering whether there are systems for translating somewhat similar languages by defining the translation once. I can imagine constructing the same AST from different (but similar) languages and then being able to print that AST to either language, but that's not defining the translation once.
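Roughly this shape, with a deliberately tiny made-up grammar (`let`/`var name = number`) just to show where the duplication lives: the AST is shared, but each language still needs its own parser and printer, which is the "not defined once" part:

```python
import re

# Shared AST node: ("assign", name, value)
def parse(src: str, keyword: str):
    m = re.match(rf"{keyword}\s+(\w+)\s*=\s*(\d+)\s*$", src)
    if m is None:
        raise ValueError(f"not a {keyword!r} assignment: {src!r}")
    return ("assign", m.group(1), int(m.group(2)))

def print_ast(ast, keyword: str) -> str:
    _, name, value = ast
    return f"{keyword} {name} = {value}"

ast = parse("let x = 1", keyword="let")   # read language A
print(print_ast(ast, keyword="var"))      # write language B -> "var x = 1"
```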