Getting Ahead of AI by “Building Your Own Lightsaber”
One concept I’ve been toying with lately, as I try to cope with the rise of generative AI and large language models (LLMs), is that I should get in front of it by learning, to borrow a metaphor from Star Wars, to “build my own lightsaber”.
In the Star Wars movies, building one’s own lightsaber was a rite of passage for a young Jedi. Anakin was a proficient engineer, as was Luke (something Darth Vader himself noted). The idea appears in the early films as well as the later ones, and even when it didn’t get much emphasis, it always resonated with me.
In the world of AI, as in the world of the Jedi universe, I think learning to build your own tools is the best way to stay competitive. Conceptually, this is what I mean by “building your own lightsaber”. In the AI world, it means something between being just a passive user of AI tools and a researcher at the tip of the spear.
Levels of Involvement in the AI World: From Skeptics to Researchers
Let’s look a little at the levels of involvement of people in AI. The four levels I think of are: 1. Skeptics, 2. Passive users, 3. Builders, and 4. Researchers.
First, there are the AI skeptics. Most people are still skeptical about the rise of AI, wary that it is taking jobs and stealing content. What caught many off guard is that AI is coming for people we didn’t expect: creatives and artists, and anyone who blends art with science, including researchers, programmers, and other content creators.
It’s safe to say that for the last couple of generations, people have been more afraid of the rise of robotics. People doing repetitive manual labour have long known that automation would come for them, ever since the start of the industrial revolution. But that change has been gradual, and meanwhile, we’ve all been surprised that AI came first for the very people warning us about the rise of the robots.
Beyond the skeptics (a population that will dwindle) are the passive users. Aside from the skeptics, the vast majority of people are eager users of AI, having come to accept that it will simplify our lives by making the jobs we already do more efficient.
They (we) know that AI can help us do analytics, draft content, create images, write code, and solve problems that at least resemble problems we’ve faced before.
These users of AI include anyone who uses ChatGPT to try to draft a response to a Reddit comment, uses a Copilot automation to help edit documents, or uses a new app to do something like draft a LinkedIn post to promote engagement. Users of AI even include early adopters who scour the latest AI apps on theresanaiforthat.com and try to do things more efficiently.
At some point, the group of “users” will broaden to “anyone who does anything with software”. For example, if you use Adobe Photoshop, it’s going to be hard to start using it without using generative AI, because generative AI is already everywhere in its most basic functions like “cut” or “crop”.
Most people will remain just users of AI, because the idea of creating AI tools is so daunting. To most of us, LLMs are inscrutable and complex. Yes, we broadly understand how they work: they’re trained on existing information and essentially recombine it to present what appears to be new information.
Creating with AI is a broad topic. It can mean anything from building a custom assistant in ChatGPT all the way through to hosting your own LLM trained on your own dataset. But even the entry point can feel daunting.
At the tip of the spear are researchers in the field of AI, LLMs, and other related sciences. They release papers, design models, and generally work for large organisations like companies and universities.
If you want to be a researcher in AI, then of course that’s amazing. But I think it’s a long and difficult course, and if you’re already past your professional prime and don’t have the time and budget to start something new, then it may be a bit beyond you.
Besides this, it’s intellectually demanding. The idea of creating AI is very foreign to most people. Most people don’t even really fully understand how AI works, so how can we think of ourselves as somebody who could create one?
But there’s a middle ground, and this is where I think of the “build your own lightsaber” metaphor, and this is where I think builders lie.
The Jedi in Star Wars all had to become proficient at building their own tools — but they didn’t invent the tools or underlying technologies. They didn’t invent the concept of the lightsaber, didn’t invent the electronics or metallurgical techniques, but they did use those invented technologies to build things they could themselves use.
The builder level is where I think people should aim to be to remain competitive. This means, for example, that while I might not design the next large language model technology or even innovate on existing ones, I should be able to install, configure, and train one myself. I should be able to create my own AI tools to simplify and make my own life more efficient. And I should be able to use AI to write code to improve my own AI tools.
Becoming an AI-leveraged human is a modern incarnation of what Tim Ferriss originally described in his book “The 4-Hour Workweek.” In that book, Ferriss described how people who run businesses often get overwhelmed by bureaucratic and repetitive tasks. He outlined steps for defining those processes and outsourcing them to low-cost workers in emerging markets.
Well, that entire process of outsourcing can be re-imagined using LLMs and other AI tech. But the bonus is that the amount of leverage increases exponentially.
How to Get Started in Building
The process is simply this: Understand what you’re doing that can be enhanced through AI leverage and then begin using the tools to create that leverage. Then, begin creating your own tools to enhance that leverage even further. The process is recursive.
The more leveraged you become, the more you realise how much further you could leverage yourself, and the more interesting tools you can create to further enhance your productivity and impact.
Yes, at some point we become hamstrung by the limitations of LLMs. But that technology will progress quickly, as it already has over the last few years.
And what’s most interesting is that we don’t really know where the limits of this recursive process of optimisation are. Nobody knows exactly how leveraged they can be. Can you automate yourself to the point of becoming an entire company? Possibly. Could you become an entire government? Hasn’t been done yet, but why not? I’d vote for AI leaders over most of the current crop (including the leaders) already.
An implication of not knowing the scale of how far AI-driven leverage can go is not knowing exactly where this path is going to take us as individuals. While you know, for example, you must get ahead of AI, you don’t know exactly how to do it, nor do you know exactly why.
While you may be able to guess that in future jobs you’ll be expected to be conversant in AI, you don’t know exactly what you’ll need to know. And you don’t know if it’ll mean just “getting a job in AI”.
But not knowing the exact destination in an AI-leveraged world isn’t an excuse for not getting started. And there’s no better way to get started than by simply exploring how to use AI to better do the things that you’re already doing or perhaps want to do.
Some Personal Examples
For example, I’ve always had a few apps in the back of my mind that I’ve wanted to make, but I haven’t wanted to fork out thousands of dollars to a developer to get them to do it. What do I do? Get AI to get started. How do I get started? Well, ask AI.
It started with me making a WordPress plugin using ChatGPT, plus a bit of my own debugging, to do something I needed for my website Discover Discomfort: play back audio files inline with text, for example for language learning.
No such plugin existed in the WordPress plugin repository, so I knew I had to make it myself.
The next thing I wanted to build used a different capability: it actually called open databases. For my language study, I also need to build flashcard decks that I import into Anki, a spaced-repetition flashcard app.
Creating and managing my own Anki decks by hand is a laborious process. So I made a simple one-page application that takes my notes from language class and turns them into decks ready for direct import into Anki.
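The core of that notes-to-deck step can be sketched quite simply, because Anki accepts plain tab-separated text files for import. Here is a minimal sketch, assuming a hypothetical “term - translation” note format; the function name and format are mine for illustration, not the author’s actual tool.

```python
def notes_to_anki_tsv(notes: str) -> str:
    """Convert class notes with lines like 'term - translation'
    into tab-separated rows that Anki can import as cards."""
    rows = []
    for line in notes.splitlines():
        line = line.strip()
        # Skip blank lines and lines that aren't vocabulary pairs.
        if not line or " - " not in line:
            continue
        front, back = line.split(" - ", 1)
        rows.append(f"{front.strip()}\t{back.strip()}")
    return "\n".join(rows)


if __name__ == "__main__":
    sample = """
    obrigado - thank you
    saudade - longing

    fica à vontade - make yourself at home
    """
    # Write the result to a .txt file, then use Anki's File > Import.
    print(notes_to_anki_tsv(sample))
```

From there, a one-page app is mostly a text box and a download button wrapped around a function like this.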
It has gone on from there. I don’t let my imagination run wild, though, because I don’t want to start architecting projects far greater than my capability to manage them.
After all, managing an AI-based developer means I still have to stay on top of the project’s overall architecture and understand the code being written. Sometimes models from OpenAI and other platforms produce errors they can’t resolve themselves, because they don’t understand conceptually what they’re doing; they’re recombining code they’ve seen elsewhere.
But the process is showing me what kinds of tools I need when architecting my own software: recursive debugging, for example, and the various roles my application development framework demands, such as a project manager, an architect, and developers for different parts of the stack.
I can already foresee other interesting and difficult-to-conceptualize aspects of the development process, like pricing, design, user research, unit testing, and all kinds of things that I know I normally have to get people to do or friends to help me with.
As I mentioned, I don’t know exactly where this process of building my own lightsaber will take me. At the very least, though, it’ll teach me something about these technologies that we will have to get ahead of.
And as a bonus, I’ll have tools that I’ll actually be able to use in my life. As for the future implications — we’ll have to wait and see.