
I Don't Know What I'm Doing. Yet.

March 20, 2026 · 5 min read
AI · Learning · Personal

I work with databases for a living. I write Java. I know what a heap file is, what a B-tree does, why your query is slow. I am comfortable in the world of things I understand.

AI made me uncomfortable.

Not in a dramatic way — I wasn't afraid of it replacing me. More like the feeling of being in a conversation and missing every third reference. Everyone was speaking a language I had the alphabet for but not the grammar.

So I decided to learn it. Not by watching YouTube videos at 1.5x speed. By building things, breaking them, and writing down what I find.


What I Actually Knew on Day Zero

Not much. Here's the honest list:

  • Neural networks are loosely inspired by the brain
  • GPT means Generative Pre-trained Transformer
  • Transformer ≠ the robot from the movie
  • Something something embeddings
  • Something something vectors

That's it. That's the inventory.


The First Question I Asked Myself

What does an LLM actually do?

Not "how does it work internally" — I'll get there — but functionally: what is the thing doing when I type a prompt and get a response?

The answer I landed on: it predicts the next token, over and over, until it decides to stop.

That's it. There's no understanding. No reasoning in the human sense. It's a very sophisticated pattern-completion engine trained on a significant fraction of human writing. The magic — and there is something that feels like magic — is that pattern completion at scale starts to look a lot like thinking.
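To make that loop concrete, here's a toy sketch of next-token prediction. The bigram table and the stop token are made up for illustration; a real LLM replaces the lookup with a neural network that scores every token in its vocabulary, but the outer loop is the same shape:

```python
# Toy next-token loop. A real model computes a probability distribution
# over all tokens here; this table just hard-codes one "most likely" successor.
BIGRAMS = {
    "the": "query",
    "query": "is",
    "is": "slow",
    "slow": "<stop>",
}

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    while True:
        # Predict the next token from the last one; unknown tokens end generation.
        next_token = BIGRAMS.get(tokens[-1], "<stop>")
        if next_token == "<stop>":
            break
        tokens.append(next_token)
    return tokens

print(" ".join(generate(["the"])))  # the query is slow
```

Predict, append, repeat, stop. Everything else — sampling temperature, attention, context windows — is refinement of those four steps.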

This reframing helped me. I stopped anthropomorphizing it and started thinking of it as a system I could reason about.


The First Thing I Built

A script. Embarrassingly simple. It calls the Anthropic API, sends a prompt, prints the response.

import anthropic

# Picks up ANTHROPIC_API_KEY from the environment
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what a transformer is. I'm a backend engineer."}
    ]
)

# The response is a list of content blocks; the first one holds the text
print(message.content[0].text)

The response was good. Better than I expected. And then I spent two hours just talking to it — asking progressively more technical questions, pushing on the edges, seeing where it confused itself.

That was day one.


What I Want to Build Through This Series

Not a tutorial. There are enough of those. This is a log — a working document of someone learning in public.

Each entry will be one thing I learned, one thing I built, and one thing that confused me. Honest accounting.

The eventual goal: build something real with AI integrated into it. Not a demo. Not a wrapper around a chat UI. Something with actual engineering decisions, tradeoffs, and failure modes.

I'll figure out what that is as I go.


Next: what actually is an embedding, and why does everyone keep talking about vectors?