# share-your-work
g
In my demo, I made the statement "... t2t doesn't need the full power of OhmJS ...", but I didn't clarify. For t2t - text-to-text transpilation - you primarily need to pattern-match incoming text, then emit text based on the input. OhmJS parses incoming text, then gives you the full power of JavaScript to do anything you want with the parse tree. For t2t, you don't need to resort to class hierarchies, functions, closures, etc. You primarily need to pattern-match, then create and modify text. In addition to OhmJS's ability to pattern-match, JavaScript's "template strings" are about all you need - the ability to create text and to interpolate text during the tree-walk of the parsed input. This seems unnecessarily restrictive, but turns out to be quite powerful and mind-freeing. Fewer options -> less clutter -> increased ability to think about interesting issues. After all, "simplicity" == "lack of nuance", and my goal is to simplify DX. [Infrequently, one needs to do a tiny bit more (like gensym() a new symbol and leave it on a scoped stack for use during the tree-walk), so I provide a way to break out and call a JavaScript function, but this kind of power is not needed in most cases. I guess that, in the future, I will restrict this some more, but I'm still experimenting.]
Given this simplification, I easily invented a nano-DSL to handle the string-building bit. I call it RWR (for ReWRite). RWR is itself just t2t - it transpiles an RWR spec into JavaScript that is compatible with OhmJS.
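To make the "pattern-match, then emit with template strings" idea concrete, here is a minimal sketch. It does not use OhmJS itself (a real version would define an Ohm grammar and a semantics operation); a regex stands in for the pattern matcher so the example is self-contained, and the toy input form (`let x = 5`) and output form (`x := 5;`) are made up for illustration:

```javascript
// Minimal t2t sketch: match the input text, then emit new text via a
// template string. In real use, OhmJS would supply the pattern matching
// from a grammar; here a regex stands in so this runs on its own.
// The source/target syntaxes are hypothetical examples.
function transpileLet(src) {
  return src.replace(
    /let\s+(\w+)\s*=\s*(\w+)/g,
    // Template-string interpolation is the whole "emitter":
    (_, name, value) => `${name} := ${value};`
  );
}

console.log(transpileLet("let x = 5")); // → "x := 5;"
```

The point of the restriction is visible even at this scale: the entire "back end" is one interpolated string, with no classes or tree-walking machinery in sight.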
k
What are your use cases for t2t? Code transformation, data transformation, or both?
g
I’ve used t2t for both, but I emphasize code transformation because I feel the idea is under-utilized. It drastically changes the realm of compiler writing. One can create new languages without writing whole compilers (simply lean on existing compilers). When one can create new languages in minutes/hours instead of months, it changes one’s approach to problem solving, e.g. one can create multiple nano-DSLs on a per-project basis (“awk” and REGEX on steroids) instead of building general-purpose languages. It makes it reasonable to create S/W Architecture languages that describe Design Intent instead of Implementation and Production Engineering. To me, Python, Common Lisp, JavaScript, Odin, (Haskell, Rust, …), etc. are just assemblers for HHLLs (higher-than-high-level languages). This is like Lisp macros and Functional Programming done by pipelining instead of cramming all of the concepts into a single hair-ball of complexity.
k
What I find most attractive about t2t for code is that I can look at the intermediate code. The idea of taking many small steps towards the goal rather than a big obscure one sounds tempting (though I haven't ever done multi-step t2t).
g
There are side-benefits, too. If you own the transpiler, you can easily insert tracing/debugging/instrumentation tidbits. You also get "macros" for textual languages instead of only for list-based languages (like Lisp and Scheme). A down-side: to really do t2t in small steps, you need to emphasize machine-readability (easy to do), but machine-readable code ≠ human-readable code (machine-readable code is more verbose and repetitive - still understandable to humans, but boring and TL;DR). FYI, at one point I got up to 15 steps in building a Ceptre-to-Prolog transpiler before I veered off in some other direction. I would be happy to kibitz if anyone wants to try out the stuff I've got - I imagine it ain't packaged in pristine shrink-wrapped form yet...
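The multi-step idea above can be sketched as a pipeline of independent text-to-text passes, where every intermediate result is plain text you can read, diff, or instrument. The two passes here are toy examples I made up (not the Ceptre-to-Prolog pipeline), just to show the shape:

```javascript
// Sketch of multi-step t2t: each pass is text in, text out, so every
// intermediate stage can be inspected - or have tracing tidbits
// inserted - before the next pass runs. Both passes are hypothetical.
const passes = [
  src => src.replace(/\bdouble\s+(\w+)/g, "($1 * 2)"),     // expand a toy macro
  src => src.replace(/print\((.*)\)/g, "console.log($1)"), // retarget a builtin
];

function pipeline(src, trace = false) {
  return passes.reduce((text, pass, i) => {
    const out = pass(text);
    if (trace) console.error(`after pass ${i}: ${out}`); // inspect intermediates
    return out;
  }, src);
}

console.log(pipeline("print(double x)")); // → "console.log((x * 2))"
```

Because each stage is just text, "owning the transpiler" means instrumentation is one more `replace` away - you can add a pass that injects trace statements without touching the other passes.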