BG-001: On the use of gen-AI and LLMs
Three things led me to write this article:
The recent AI disclosure saga involving Steam and Epic’s CEO.
An RFD about LLM use from Oxide Computer, a company whose engineering culture I really like.
My own thoughts (and fears) about how people will react to how I use LLMs in certain pre-production phases of “Have a sip” (yes, this is also that kind of disclosure).
Since I still have a day job as a software engineer in infrastructure, it’s very hard to totally ignore “gen-AI” (or LLMs, specifically). What’s more, because “Have a sip” is centered around an intelligent, self-conscious robot and also touches on how AI impacts ordinary people’s lives, it’s a topic worth talking about early.
This is explored in more depth below.
Values of gen-AI usages
I generally consider gen-AI, in its current iteration, to be more of a tool. Thus, I think it’s worth looking at its usage like that of other tools: in terms of the values it brings to the table.
What comes to my mind is very similar to what (I think) Oxide leans towards:
Using gen-AI does not replace humans’ responsibility and role in creative endeavors, which include writing, drawing, composing and even writing software.
We have to be mindful when using gen-AI, for while its outputs can sharpen our thoughts, over-reliance can atrophy the “thinking muscles”, which are our most important tool.
Gen-AI tools (and specifically LLMs) are probabilistic tools that can be guided by human-designed workflows, which either come from the gen-AI companies or from us. Because of this nature, it’s nigh impossible to expect their output to rigorously capture the full gamut of nuances and emotions that we need to handle/express.
Leaning too hard on gen-AI can erode human trust, which is essential to our teamwork, as well as to the relationships we have with our players and other partners.
When/where and how to use gen-AI
I will separate gen-AI in general from LLMs a bit here, because there are cases where they are used for distinct purposes.
For research
I strongly believe that all great work is the result of spending an unreasonable amount of time on problems others simply dismiss (but not necessarily the other way around, unfortunately). What this means is that a great deal of research and experimentation must happen for us to write good stories, draw interesting artworks, compose pleasing music or write robust code.
In the past, when (Google’s) search was not littered with SEO ads, it was the de facto research tool on the Internet. Before that, we would go to the library, read curated magazines, talk to experts, go to classes, etc. These are all research activities.
LLMs, being new, do not exhibit this issue yet. Thus, their large corpus of material is very useful for any kind of research. Though, I would argue that they are better at text-based search, which is useful for writing and software engineering; I have yet to find any prominent image-based or melody-based search tool.
Still, this is just a phase. Should there come a time when even LLMs are littered with ads and engineered content, we should do fine if our research skills remain strong. This means honing basic skills like fact-checking, finding and navigating specialized publications (research journals, community forums, user groups, etc.), as well as reading comprehension.
For editing and reviewing
LLMs present two very powerful paradigms:
An instant feedback loop, trained on a large corpus of human behaviors on the Internet.
A dataset of what many people consider to be “good”, in the sense that it’s popular (instead of being an absolute good, if there is such a thing).
This allows them to assist as editors and reviewers on the form of things (e.g. spelling, grammar, structure, flow). However, when it comes to the essence of things, they don’t offer much beyond telling us what’s popular.
To produce good work, we must perfect both the form and the essence.
For creating
This section is really long, because I think this is where the most debate about gen-AI usage will be. Plus, I have personal anecdotes to tell.
One of the hardest tasks for any creative person, in my opinion, is to create something from a blank slate. Here, gen-AI is very powerful, as it can leap into the unknown with just a little bit of prompting. It’s like sending a scout into the fog of war, so that we can get the lay of the land (and sometimes spot important activity).
If used well, this essentially facilitates faster prototyping and visualization of concepts that would otherwise have taken considerable effort to get started.
However, I would caution against using gen-AI beyond prototypes and throw-away sketches, because at the end of the day, it’s not going to match exactly what we have in our minds. Plus, relying too much on prompting deprives us of the actual production skills and knowledge that we will need to finish the work.
I think my first-hand experiences here will help, because I have lived this from multiple angles: a newbie (in pixel art), an intermediate hobbyist (in writing) and an expert (in software engineering). And because of the timing, I have perspectives from both before and after the rise of gen-AI.
As a newbie
When we are new, we lack both the understanding of the domain, as well as the language and context to ask questions. For example, when I first started drawing pixel art, I struggled to understand how to pick colors for shades. It was only after learning about color ramps that I had a better idea of the how and why. It turns out that there is both a logical process and an artistic choice behind these. During this phase, what actually helped was being able to find good books or guided learning experiences (e.g. Pixel Logic, Pixel Art Academy: Learn Mode). They gave me a more holistic lay of the land, which helped a lot in mapping out how I should improve my skills.
During this time, I also dabbled in using Stable Diffusion and DALL-E, and even specific pixel art gen-AI tools like PixelLab.ai, to generate some work, but the results were not satisfactory. I would chalk this up partly to not knowing how to prompt, and partly to the fact that what I have in mind is very unpopular (e.g. non-human characters, scenes with very specific compositions, etc.).
As an intermediately-skilled person
When we are somewhat knowledgeable, it’s actually the practice bits that will get us to produce better results. Using gen-AI will not help us here.
Before gen-AI, I had already written a fair bit (mostly technical stuff, but also the script for a musical play and some fiction). I also read a lot of fiction, so I roughly knew what good writing should feel like, even if I couldn’t produce at that level.
When ChatGPT came out, I was amazed at first at how it could write very convincing stories, albeit with me prompting heavily. However, I realized it was a crutch once I started writing for “Have a sip”, as I ended up spending far more time prompting to reach a particular feeling than actually writing (maybe I don’t know how to prompt it to write mono-no-aware stories). At that point, I realized a more detailed prompt doesn’t actually help; to save time, I should just do the dirty work instead. And over the 1+ year of writing and re-writing a lot of parts of “Have a sip”, I think things have gotten better, to the point where I can sit down and write an acceptable piece in 1-3 weekend evenings (around 4-12 hours of writing, excluding the thinking time). This article, for example, took about 4.5 hours.
Meanwhile, continued use of ChatGPT to generate things started to feel weird beyond a certain point (especially with ChatGPT 5.x). Sometimes it would throw back technobabble with neither head nor tail. Or, when it worked, it would sometimes generate stories so cliché that they’re cringey to read, not the type of nuanced writing that I would have enjoyed.
Looking back, I believe the act of writing frequently helped more. With a good and fast feedback loop, which LLMs can somewhat provide, it’s easier to do deliberate practice and improve our own skills. This pays more dividends than any advanced prompting technique can give, because now we don’t need an extra layer to translate our ideas into reality.
As a person with expertise
When we have reasonable expertise, LLMs are time-savers. This applies to my day job as a software engineer. In most cases, even at 200 WPM, I can’t match the output of a few simple prompts in ChatGPT at all. And sometimes I would spend far too much time thinking/procrastinating and end up with nothing to show for it. This is where I think gen-AI shines, as I can give it pretty specific instructions on what I want, get the generated output and do a bit of editing to arrive at exactly what I need. However, to get to this point, I had already spent 10+ years writing a crap load of code, reading both good and terribly-written programs (both my own and others’), as well as building enough intuition on what worked and what didn’t.
This is where there actually is a dividend in learning how to prompt, because it allows us to exert more precise control over the output of LLMs. And if we think about it, this is no different from learning a programming language/DSL. The more we understand the underlying mechanics, the better we will be at controlling the outcome.
A note on ethics/copyright
This part could be sensitive, since as creative people, we also need to make ends meet. And it’s quite clear that gen-AI is eating our share of the pie.
I’m somewhat lucky that I don’t have to compete with gen-AI for my day job yet. And for my work on “Have a sip”, I have shifted from a protective, commercial-minded mindset to more of a hobbyist one, so I don’t mind things being scraped and copied as much.
However, it’s pretty clear that these gen-AI technologies are being trained without people’s consent, whether we like it or not. And it’s a real issue that they are taking away jobs and growth opportunities from various people, especially folks who make a living producing creative works.
So it’s a conundrum: as people keep paying for gen-AI and for work that leverages gen-AI, things will get increasingly harder for us creatives. But at the same time, it’s not possible to escape gen-AI usage entirely, especially because it does have some real benefits.
I’m not so visionary that I have a solution here. Personally, I feel that applying cancel culture to anything AI-generated is just venting, as we can’t fight the tides alone. Plus, I believe a certain group of creatives, myself included, benefits from being able to leverage gen-AI to do more things with less resources and time. However, I also don’t want to fully embrace generated content, which has started to flood the Internet.
I just hope that as we struggle against this new phenomenon, we will come out of it stronger. And to that end, I think it’s better to be kinder to those we feel are genuine human creators, while being more critical of generated content.
If it’s any consolation, I actually feel that it’s the very act of using gen-AI to assist with the production of “Have a sip” that left me feeling more confident in my own skills, as I became more aware that, without significant investment, gen-AI wouldn’t be able to produce work that comes close to my expectations.
Admittedly, these thoughts might change in the future, as more powerful models are rolled out, or if someone uses gen-AI to actively steal my work. But if someone really wants that to happen, it would happen regardless of whether gen-AI was involved or not.
In the end, I don’t think we can fight the tides of our time. But at least within BitsGofer Studio, I will remain committed to shipping human-crafted products and being honest about how they’re made. The audience will be the judge of whether the results are hollow or genuine.
Conclusions
As with many of my musings, there isn’t a clear outcome from this article.
However, as this gets applied to BitsGofer Studio, I think what will stay is that:
We don’t reject gen-AI outright. Instead, we will learn to leverage it well.
We will still work on honing our creative muscles.
We should be honest with our players and partners about how we use gen-AI, even if it hurts reception at first.
This is because we value human trust, and that means not lying about how we do things.
A surprising outcome
Since I have been thinking about this AI debate for many months now, it actually produced something that I’m quite pleased about:
In “Have a sip”, there will be a character (Rebecca) who is an aspiring musician, with a love-hate relationship with AI-generated music. A lot of my musing here will actually fuel that aspect of her character.
Try to copy that, LLMs :)
P.S. Forgive my StarCraft references, if you notice them.
A friend of mine mentioned the idea that using gen-AI is like “sending a scout into the fog of war”, and it immediately triggered me to think about “form” and “essence”, haha.

