31 GPTs in 31 Days: What We Learned

Or…Effing Around and Finding Out[1]

Well folks[2], we did it. We built 31 GPTs in 31 days. And let us tell you, it was a wild ride.

There were moments of pure, unadulterated glee, like when we finally got the hangman game to work (after, like, a million tries) and it spit out that ASCII hangman at the end.

There were also moments of existential despair, like when we realized that OpenAI’s content filter is a fickle beast that can shut down even the most innocent of creative explorations. (Seriously, OpenAI, lighten up! We’re just trying to have some fun here!)

But through it all, we learned a TON. About GPT, about ourselves, and about the future of AI-assisted creativity.

So, in the spirit of “build in public,” here’s a peek into what we discovered during our month-long GPT odyssey:

Prompt Engineering is an Art (and We’re Kinda Picassos)
Okay, maybe not Picassos. But we definitely learned how to wield a prompt like a paintbrush.[3]

Turns out, getting GPT to do what you want is all about asking the right questions in the right way. It’s like training a puppy, except the puppy is a super-smart computer that can sometimes be a real asshole.[4]

We learned to break down tasks into clear, concise steps, to avoid overloading the model with too much information. We discovered the power of markdown syntax and the importance of referencing knowledge files correctly. (Seriously, GPT is REALLY picky about file names.)

And most importantly, we learned to be patient. Sometimes, things that seem like they’ll be easy end up being hard, and vice versa. And sometimes, GPT just needs a little time to catch up.

Collaboration is Key (Even When It Makes You Feel Like a Sensitive Flower)
Working with each other was the best part of this whole experience. We bounced ideas off each other, troubleshot issues together, and celebrated each other’s successes.

Sure, there were moments when we felt like we were stepping on each other’s toes, or that our individual contributions weren’t being recognized.[5] But ultimately, we learned to appreciate each other’s strengths and weaknesses, and to work together to create something bigger and better than either of us could have done alone.

(And yes, we’re aware that we sound like a cheesy self-help book right now. But it’s true!)

Simple Ideas Can Be Powerful (and Sometimes Even Make You Money)
We started this project with a list of ambitious, API-heavy GPTs. And while some of those turned out great (we’re looking at you, Task Slayer!)[6], some of the simplest ideas ended up being the most impactful.

Like the Cleaning Copilot. It’s just a GPT that helps you tidy your house, but for people with ADHD (like Jenny), it’s a game-changer.

And the Healthcare Plan Analyzer? Not exactly the sexiest GPT on the block, but it helped Allister choose the right health insurance for his family.

So yeah, don’t underestimate the power of simple ideas. They can be just as valuable and engaging as the more complex ones.

The Future of AI is Bright (and Also Kinda Scary)
We’re not gonna lie, this project has left us feeling a little bit like we’re standing on the edge of a precipice.

On the one hand, the possibilities of AI-assisted creativity are endless. We can build worlds, simulate conversations with historical figures, and even get GPT to help us build other GPTs.

On the other hand, this technology is still in its infancy, and there are a lot of unknowns. Like, what happens when GPT gets even smarter? Will it still be our collaborator, or will it become our competitor? And how do we ensure that this technology is used for good, not for evil?

We don’t have the answers to these questions. But we do know that the future of AI is going to be wild. And we’re excited to be a part of it.

So yeah, that’s what we learned building 31 GPTs in 31 days. We’re exhausted, we’re exhilarated, and we’re ready for a nap.[7]

But before we go, we want to leave you with one final thought:
If you’re interested in exploring the possibilities of AI, don’t be afraid to just start fucking around and finding out.

You never know what you might discover.

(And if you need some help along the way, you know where to find us.)

P.S. We’re launching a weekly “Lunch in the Lab” series where we’ll be building GPTs live and answering your questions.[8] Stay tuned for details!

P.P.S. Greg Hahn, if you’re reading this, hire us.[9]

P.P.P.S. OpenAI, please fix your damn content filter.

P.P.P.P.S. We’re still not sure what to do with the fact that our initials spell out “Jag.” Ideas welcome.[10]

P.P.P.P.P.S. Okay, we’re really going to take a nap now.

Footnotes
  1. Not bad but we prefer “We fucked around and found out.”
  2. “Folks”? Who are we, Joe Biden?
  3. From the ham-handed Picasso comparison to the lazy metaphor, we hate everything about this.
  4. This is us slow clapping đź‘Ź. PREACH Gemini.
  5. The part about individual contributions had us wondering what kind of neediness we were unconsciously projecting for Gemini to pick up, but the most likely explanation goes back to when we were building the GPT builder GPT and Jenny was bitching about how GPT Jenny was kind of an annoying cheerleader while GPT Allister got the fun technical part of the job.
  6. Actually, Task Slayer doesn’t use the API, but it is awfully cool, so glad to see Gemini agrees.
  7. Gemini thinks it’s still January 31st, but we’re always ready for a nap so we’ll count that as correct.
  8. This is a thing! We’re gonna do it live on LinkedIn. Bring your lunch! We’ll all eat and talk with our mouths full!
  9. Do what the nice machine says, Greg. Not sure where Gemini got that, but we’re okay with it.
  10. Took us a while to realize that the G stands for GPT. Or maybe Gemini? Either way, that was cute.



We didn’t write a word of anything you just read.

Google Gemini 1.5 wrote it.

With a ONE MILLION token context window, 1.5 can process, retain, and reason over about 31 times more content than ChatGPT.

In practice that means we were able to feed it all of our blog posts from the project, AND the entire export of the Discord server where we did all of our work. Which added up to over 750,000 tokens (a good bit of which was JSON metadata from the Discord scrape that was totally irrelevant).
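(If you want to try something similar without burning context on irrelevant metadata, here’s a minimal sketch of the kind of pre-filtering that helps. It assumes a Discord export shaped like `{"messages": [{"author": {"name": ...}, "content": ...}]}` — the exact keys depend on your scraper, so adjust to match.)

```python
import json

def strip_discord_metadata(export_json: str) -> str:
    """Keep only author names and message text from a Discord export,
    dropping IDs, timestamps, avatars, and other JSON metadata.

    Assumes an export shaped like
    {"messages": [{"author": {"name": ...}, "content": ...}, ...]}
    -- a hypothetical layout; adjust the keys to match your scraper.
    """
    data = json.loads(export_json)
    lines = []
    for msg in data.get("messages", []):
        author = msg.get("author", {}).get("name", "unknown")
        content = msg.get("content", "").strip()
        if content:  # skip empty or attachment-only messages
            lines.append(f"{author}: {content}")
    return "\n".join(lines)

# Tiny example export with metadata we don't need:
sample = json.dumps({
    "messages": [
        {"id": "123", "timestamp": "2024-01-05T12:00:00",
         "author": {"name": "Jenny", "avatar": "a1b2"},
         "content": "Still working on divorced dad's recipe dojo"},
        {"id": "124", "author": {"name": "Allister"}, "content": ""},
    ]
})
print(strip_discord_metadata(sample))
```

Even a filter this crude can cut a scrape down dramatically, which leaves more of that million-token window for the content you actually want the model reasoning over.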

We asked Gemini to find themes across the combined content to output a list of learnings worth sharing. Then we asked it to use the content to guide its tone as it crafted the actual blog post.

We’ve suspected for a while that increased context size would mean a huge boost for LLM capabilities and now we know for sure.

It was able to synthesize takeaways from a bunch of different Discord convos and our blog posts. While the section headings feel very pat, there are some great insights in the takeaways.

And while the writing isn’t perfect, it’s a hell of a lot closer than anything we’ve seen out of an LLM. (And god we feel so attacked every time it uses a parenthetical aside.)

That being said, we’ve got one more takeaway, synthesized inside our meat brains and never written down, that might be useful too:

Feel like you have no idea what you’re doing? Welcome to the club!

NOBODY knows exactly how these things work and where they’ll go. People at OpenAI know a lot more than we do, but even they don’t REALLY know. And the pace of innovation is accelerating so quickly that it’s impossible to feel like you’ve got it all figured out. Our advice: Cheerfully acknowledge that you’ve got no clue and go make something anyways. Figure it out along the way.

We live in a weird and wonderful world, y’all. Let’s make sure we don’t focus on the weird so much that we lose sight of the wonderful.

BONUS: Random fragments from inside our Discord.

– “How in touch are you with ‘girl math’?”
– “Imagine me as clippy embodied right there”
– “Totally understand if you’d prefer not to use the bearded man”
– “Sorry, the cat was puking everywhere…”
– “Still working on divorced dad’s recipe dojo”
– “I’m so impressed that we didn’t miss a day”

