Recent Deep Dives

Let's start with LLMs

Been thinking about LLMs as producing shadows rather than understanding - similar to Plato's cave allegory. The outputs aren't grounded in true comprehension, but they're useful for structuring thoughts and generating starting points. When code is involved, you can immediately test whether the shadow works in reality.

I recently tried wiring an LLM into my Obsidian vault. The idea was simple: let the machine clean up my messy fragments and spin them into a blog post. And it worked. The draft was polished, complete with light jokes and confident summaries—the kind of thing I could have published with a click.

But when I read it back, it didn’t feel like mine. It wasn’t bad, just strangely hollow. I asked myself, why?

The more I dug in—reading theory from The Death of the Author to recent work on “post-artificial” writing—the clearer it became: authorship has never really been about typing but about owning the choice to say, “this represents me.”

An LLM can generate sentences, but it can’t decide which ones are worth sharing. That responsibility is still mine.

So maybe AI isn’t diminishing authorship—it’s highlighting what was always true: the author isn’t the one who wrote every word. The author is the one who decided those words deserved to be in the world.

Current Technical Deep Dives

Been working across a lot of different areas lately. Here are some topics I'll expand on in the future.

Editor Migration and Back

It started with WebStorm having serious performance issues in our TypeScript monorepo. I went through the whole journey - tried Zed and VSCode, then landed in Neovim for a while. The Neovim setup was solid once configured, but IntelliJ fixed their performance issues, and I'm back. Sometimes the best tool is the one you already know. I also got a paid Cursor subscription from my company, but the UX is terrible and I couldn't get used to it. I used it to explore the new codebase, then canceled the subscription after discovering Claude Code.

OAuth Implementation Deep Dive

Been implementing both OAuth 1.0 and OAuth 2.0 flows for various integrations. OAuth 1.0 is particularly interesting when working with legacy APIs - feels antiquated, but it's still widely used.
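The part of OAuth 1.0 that trips everyone up is request signing. Here's a minimal sketch of the HMAC-SHA1 signing step in TypeScript using Node's crypto module; the function names are mine, and a real request would also merge the oauth_* protocol parameters (nonce, timestamp, consumer key) into the parameter set before signing:

```typescript
import { createHmac } from "crypto";

// Percent-encoding per RFC 5849 (stricter than plain encodeURIComponent).
const enc = (s: string): string =>
  encodeURIComponent(s).replace(
    /[!'()*]/g,
    (c) => "%" + c.charCodeAt(0).toString(16).toUpperCase()
  );

// Signature base string: METHOD & encoded URL & encoded, sorted params.
function signatureBaseString(
  method: string,
  url: string,
  params: Record<string, string>
): string {
  const normalized = Object.keys(params)
    .sort()
    .map((k) => `${enc(k)}=${enc(params[k])}`)
    .join("&");
  return [method.toUpperCase(), enc(url), enc(normalized)].join("&");
}

// Sign with HMAC-SHA1; the key is consumerSecret&tokenSecret, both encoded.
function hmacSha1Signature(
  base: string,
  consumerSecret: string,
  tokenSecret = ""
): string {
  const key = `${enc(consumerSecret)}&${enc(tokenSecret)}`;
  return createHmac("sha1", key).update(base).digest("base64");
}
```

Most "invalid signature" errors with legacy APIs come down to the base string: wrong encoding, unsorted parameters, or a forgotten token secret in the key.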

Working with Ory Hydra for OAuth 2.0 has been good!

Ory Stack Integration

After extensive work with Ory products (Kratos for identity, Hydra for OAuth), they're genuinely well-designed.

  • Kratos: Handles identity management and authentication flows
  • Hydra: OAuth 2.0 and OpenID Connect server
  • Oathkeeper: Identity and access proxy
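To show how the pieces fit together, here's a minimal Oathkeeper access rule as a sketch - the URLs, rule id, and handler choices are invented for illustration, not from a real deployment:

```json
{
  "id": "allow-public-health",
  "match": {
    "url": "https://api.example.com/health",
    "methods": ["GET"]
  },
  "upstream": { "url": "http://backend:8080" },
  "authenticators": [{ "handler": "anonymous" }],
  "authorizer": { "handler": "allow" },
  "mutators": [{ "handler": "noop" }]
}
```

Oathkeeper sits in front of the services, Kratos answers "who is this?", and Hydra answers "what did they consent to?".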

The email verification flow in Kratos needed work.

OIDC Integration Challenges

Been working with OpenID Connect integrations across multiple identity providers. Each provider has its quirks - mapping claims from different structures, handling array-based customer references, dealing with redirect URI whitelisting for different environments.

One interesting part is building a Jsonnet claims mapper that pulls a partner reference out of the provider's claims:

local claims = std.extVar('claims');
local partnerListHasItems =
  std.objectHas(claims, 'raw_claims') &&
  std.objectHas(claims.raw_claims, 'partnerList') &&
  std.length(claims.raw_claims.partnerList) > 0;
local partnerListFirstValue =
  if partnerListHasItems then
    claims.raw_claims.partnerList[0]
  else
    null;

When you have multiple customer references coming from the SSO provider, you need to iterate through them all to find a match in your system. This gets complex when dealing with post-registration hooks across different services.
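That iteration can be sketched like this; the Account shape and the in-memory store are invented for illustration - in practice the lookup would hit your user store:

```typescript
// Hypothetical account shape; in a real system this comes from the database.
type Account = { id: string; partnerRef: string };

const accounts: Account[] = [
  { id: "acc-1", partnerRef: "P-100" },
  { id: "acc-2", partnerRef: "P-200" },
];

// The SSO provider may send several customer references; the first one is
// not guaranteed to be the one our system knows, so try each in order.
function matchAccount(partnerList: string[]): Account | null {
  for (const ref of partnerList) {
    const match = accounts.find((a) => a.partnerRef === ref);
    if (match) return match;
  }
  return null;
}
```

The tricky part is that this matching has to behave identically wherever it runs - in the claims mapper, in post-registration hooks, and in any service that resolves the reference later.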

Mirrord for Kubernetes Development

Started using mirrord for local Kubernetes development. It's essentially magic: it runs your code locally while it interacts with remote pods as if it were running in the cluster.

The configuration is extensive, but the basics are simple:

{
  "target": "pod/bear-pod",
  "feature": {
    "env": true,
    "fs": "read",
    "network": {
      "incoming": "steal",
      "outgoing": true
    }
  }
}

What mirrord does:

  • Steals or mirrors incoming traffic from the target pod
  • Routes outgoing traffic through the remote pod
  • Syncs environment variables and filesystem access
  • Handles DNS resolution through the cluster

The HTTP filtering is particularly useful - you can steal only specific requests based on headers or paths while letting others pass through to the original pod. This means you can debug production issues without affecting all traffic.
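As a sketch, a header-based filter looks roughly like this - note the `http_filter` schema has changed across mirrord releases, so check the config reference for your version; the header name here is made up:

```json
{
  "target": "pod/bear-pod",
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "x-debug: true"
        }
      }
    }
  }
}
```

With this, only requests carrying that header get stolen to your local process; everything else keeps flowing to the original pod.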

Database Operations

Moved several databases between regions using PostgreSQL tools. The key flags that save headaches:

pg_dump --format=c --dbname="..." --file=backup.dump
pg_restore --clean --if-exists --no-owner --no-privileges --dbname="..." backup.dump

The --no-owner flag is critical - it prevents permission issues when the restoring role doesn't match the original owner. Note that with the custom format (--format=c), --clean and --if-exists belong on pg_restore; pg_dump ignores them for archive output.

Infrastructure Challenges

Hit an interesting issue with Hydra where the login_challenge parameter was too large for our ingress controller. Solution required adjusting buffer sizes:

proxy-buffer-size: "128k"
large-client-header-buffers: "4 64k"
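For ingress-nginx, these keys go into the controller's ConfigMap; the name and namespace below are the usual defaults and may differ in your install:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-buffer-size: "128k"
  large-client-header-buffers: "4 64k"
```

proxy-buffer-size can also be set per-Ingress via annotation, but large-client-header-buffers is controller-wide.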

Also discovered that Hydra's system secrets and clients are tricky (I'm looking at you, Hydra-maester).

🤖Partially generated with Claude Code