Is anybody using GPT to autogenerate blog posts or...
# of-ai
t
Is anybody using GPT to autogenerate blog posts or other educational media?
I've been messing around with generating simple blog posts on Substack, so I want to see what others are doing along these lines.
g
FYI - I’m thinking about how to sketch out ideas in a “mind map” (Kinopio) and have GPT ghost-write a fleshed-out post from that (this would also be good for README.md files in dev repos). I had some success with jamming Kinopio-generated JSON into the Kagi Summarizer, but hit a roadblock when I tried to pour the same JSON into ChatGPT (GPT-3): the prompt was too long, and it won’t accept URLs. I’m thinking of trying again with Llama-2, but so far I haven’t gone down that learning curve.

I’ve also had success with writing a terse paragraph, then having ChatGPT turn it into a chapter for a “book”. Prompt: ‘edit this chapter and flesh it out’, followed by the terse paragraph text. Result: successfully generated chapters, but lots of reading for me: approving, editing, etc., stuff I don’t really want to be bothered with. I will gladly share, but it’s too long to post here.

It looks like feeding short articles/chapters to ChatGPT will work, but I wish for something even simpler. I guess I could dump my thoughts into Kinopio, then dictate chapters into text files while prompting myself by looking at the Kinopio thoughts and recording my ramblings (say, using speech-to-text). If my other experiments fail, I will fall back to this strategy.

Along the way, someone mentioned ‘type.ai’ to me. It looks interesting, but I haven’t had time to check it out. An experiment with type.ai managed to produce a reasonable summary of the same Kinopio thought-map as above, but the experimenter typed the text for each thought-bubble in manually.

I would enjoy hearing/reading about any experiments you try. In fact, if you are successful, you can have GPT generate a blog post about how you did it :-).
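Roughly what I have in mind, as a Python sketch that works around the prompt-too-long roadblock by chunking the export before sending it to a model. I’m guessing at the Kinopio export shape (a top-level `cards` list whose items have a `name` field), and the model name and drafting prompt are just placeholders, not a finished workflow:

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def load_thoughts(path):
    """Pull the text out of a Kinopio JSON export.
    NOTE: assuming a top-level "cards" list whose items carry a
    "name" field -- adjust to the real export schema."""
    with open(path) as f:
        space = json.load(f)
    return [card["name"] for card in space.get("cards", [])]


def chunk(thoughts, max_chars=6000):
    """Group thoughts into batches small enough to fit in one prompt,
    which was the roadblock with pasting the whole JSON at once."""
    batch, size = [], 0
    for t in thoughts:
        if size + len(t) > max_chars and batch:
            yield batch
            batch, size = [], 0
        batch.append(t)
        size += len(t)
    if batch:
        yield batch


def draft_post(path):
    """Ghost-write blog-post sections from the mind map, one chunk at a time."""
    sections = []
    for batch in chunk(load_thoughts(path)):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{
                "role": "user",
                "content": "Ghost-write a few blog-post paragraphs from "
                           "these mind-map notes:\n" + "\n".join(batch),
            }],
        )
        sections.append(resp.choices[0].message.content)
    return "\n\n".join(sections)


if __name__ == "__main__":
    print(draft_post("kinopio-export.json"))
```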
j
Why should we wish to inflate our writing with AI?
g
IMO, it comes down to a projectional-editing / syntax-is-cheap argument: experts prefer to compress information, while novices need to be reminded of what each part means.

For example, the equation of a line can be expressed in expert notation as y = mx + b, whereas novices (e.g. school children) might want to see y = slope × x + y_intercept. The underlying idea is the same; only the expression (syntax) changes.

Another example: IMO, Common Lisp has one of everything. Why not express ALL programs in CL’s prefix S-expression syntax? Why do we insist on using other syntaxes, like Python or Haskell? (N.B. CL already has a syntax for type information, also in prefix form.)

Another example: why not use the even-more-concise binary lambda calculus notation for everything? Quick, what does this program do?
λab.b(λcde.c(λfg.g(fd))(λf.e)(λf.f))a
(from https://justine.lol/lambda/)

Or, why do we bother with opcodes? Why not just use transistors? This question brings up an interesting issue: is written language an innately expressive form, or just something that we’ve learned to live with, biased by our available media (e.g. clay tablets, paper, graphite, rubber)? I find myself wanting to watch a YouTube video rather than read a paper...
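Here’s the line-equation point as a trivial Python sketch: the computation is identical, and only the names (the syntax) change. The function and parameter names are my own illustrations, of course:

```python
def y(m, x, b):
    """Expert form: compressed names, assumes you know the convention."""
    return m * x + b


def line_value(slope, x, y_intercept):
    """Novice form: the same computation, with each part spelled out."""
    return slope * x + y_intercept


# Same underlying idea, two surface syntaxes.
assert y(2, 3, 1) == line_value(slope=2, x=3, y_intercept=1) == 7
```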