# Jasper Rexford - rexford.dev

> Self-taught programmer working on AI integration, reverse engineering, and practical technology.

## About

Self-taught programmer based in London, UK. Building tools and systems that solve real problems at the intersection of AI, reverse engineering, and practical technology.

## Skills

AI Integration, Reverse Engineering, Python, NixOS / Linux, SvelteKit, 3D Printing, Web Scraping, Infrastructure

## Interests

- AI Safety & Alignment - How we build AI that stays helpful and honest as it gets more capable.
- Offensive Security - Reverse engineering, vulnerability research, and understanding how things break.
- Open Source Hardware - 3D printing, open-firmware keyboards, DIY tech that you actually own.
- Autonomous Systems - Building agents and workflows that run businesses end-to-end.

## Contact

- Email: jasper@rexford.dev
- GitHub: https://github.com/jasperrexford
- LinkedIn: https://linkedin.com/in/jasperrexford
- Website: https://rexford.dev

## Blog Posts

### AI Picked My Keyboard Then Got the Firmware and Compiled It

Date: 2026-04-28
Tags: AI, Hardware, Linux, Open Source
URL: https://rexford.dev/blog/ai-picked-my-keyboard

So I needed a new keyboard. I've been typing 8+ hours a day and my wrists were starting to complain, so I knew I wanted something ergonomic and split. But I also run NixOS, so driver support actually matters to me. I'm not going to buy something that needs proprietary software to configure.

My requirements were pretty specific:

- Split layout for ergonomics
- Open-source firmware, QMK or similar, so I actually own it
- Hot-swappable switches because I want to experiment
- Under £150

That's a surprisingly small search space, right? There are hundreds of mechanical keyboards out there, but the overlap of split, open firmware, hot-swap and reasonable price is tiny.

## The research

I described exactly what I wanted to Claude and asked it to research options. Not "pick me a keyboard" because that's way too vague.
I gave it the exact constraints and asked for a shortlist with actual tradeoffs between them. It came back with three options and compared them on the things that actually matter. It also flagged stuff I hadn't even thought about, like which keyboards have active QMK community forks versus abandoned repos, and which hot-swap sockets are compatible with the widest range of switches. That's the kind of detail you'd only find after hours of forum diving.

The pick was the [Epomaker Split70](https://epomaker.com?sca_ref=11173479.3YZ7kVjt1s). QMK/VIA compatible, split 70% layout, south-facing LEDs, hot-swap Gateron sockets, gasket mount, rotary knob, tri-mode connectivity with Bluetooth 5.0 and 2.4GHz wireless. Came in under £120. Ordered it.

## What happened when it arrived

This is where it gets interesting. The keyboard showed up, I plugged it into NixOS and it was recognised immediately, because QMK keyboards present as standard HID devices. No drivers, no software, just works. But I wanted the actual firmware source code so I could build custom effects and really make it mine.

Now here's the thing about QMK. It's licensed under [GPLv2](https://github.com/qmk/qmk_firmware/blob/master/license_GPLv2.md). That means any manufacturer shipping a product with QMK firmware is legally obligated to provide the complete source code on request. This isn't a suggestion, it's the licence. If you ship GPL code in a product, you must make the source available to anyone who asks. [QMK's own documentation](https://docs.qmk.fm/license_violations) is very clear about this, and they actively track vendors who don't comply.

So Claude, using the Protonmail MCP server through Proton Bridge, drafted and sent an email to Epomaker's support requesting the GPL source code for the Split70 firmware. It referenced the GPLv2 licence, cited the specific obligation and asked for the complete source for the as-shipped firmware. Professional and to the point.

## The back and forth

Epomaker responded.
They were happy to provide the source but needed to verify the purchase first. They asked for my Amazon order number. Claude got that notification and flagged it to me, so I gave it the order ID. Claude sent it back, and within a couple of days the firmware source code came through.

The whole exchange was handled through the Protonmail MCP server. Claude drafted the emails, sent them from my address, monitored for replies and notified me when something needed my input. I only had to step in once to provide the order number.

## Compiling from source

Once the source code arrived, Claude downloaded it, set up the QMK build environment and compiled the firmware targeting the Split70. It built cleanly. I flashed it onto the keyboard and it worked first time.

Since then I've been customising it. I've built a YAML-to-C generator for custom RGB effects, so I can define lighting patterns in a config file instead of writing C by hand every time. I've got a custom keymap dialled in. And because it's all open source, I know exactly what's running on my hardware.

## The full pipeline

Think about what just happened here. From a single conversation:

1. Claude researched keyboards against my specific constraints
2. Found the Epomaker Split70, compared tradeoffs, recommended it
3. After it arrived, it drafted a GPL compliance email to the manufacturer
4. Sent it via Protonmail automatically
5. Monitored for the reply
6. Notified me when they needed my order number
7. Sent that back
8. Downloaded the firmware source when it arrived
9. Compiled it from source
10. I flashed it

That entire pipeline, from research to running custom firmware, was orchestrated by AI, with me stepping in only when it actually needed a human decision. That's not AI replacing me. That's AI handling the parts I don't want to spend hours on, so I can focus on the parts I actually enjoy, like designing custom lighting effects and fine-tuning my keymap.
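The YAML-to-C generator is the fun part of that customisation. Mine isn't published, but the core idea fits in a few lines — here's a hypothetical minimal sketch. The frame format and the C array layout are illustrative only; a real QMK effect would hook into the RGB matrix API rather than emit a bare array:

```python
# Hypothetical sketch of a YAML-to-C lighting-effect generator.
# The (r, g, b) frame format and the generated C array layout are
# illustrative, not the actual generator from the post.

def frames_to_c(name, frames):
    """Render a list of (r, g, b) frames as a C array definition."""
    lines = [f"const uint8_t {name}[][3] = {{"]
    lines += [f"    {{{r}, {g}, {b}}}," for r, g, b in frames]
    lines.append("};")
    return "\n".join(lines)

# In the real pipeline the frames would come from a YAML file, e.g. with
# PyYAML: frames = yaml.safe_load(open("effects.yaml"))["pulse"]
pulse = [(255, 0, 0), (128, 0, 128), (0, 0, 255)]
print(frames_to_c("pulse", pulse))
```

The point is just that the lighting definition lives in data, and a small script turns it into C at build time, so tweaking an effect never means hand-editing firmware source.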
## The keyboard itself

Genuinely the best keyboard I've ever used. The split layout took about a week to adjust to, but now my shoulders and wrists feel completely different after long sessions. The gasket mount gives it a really satisfying feel, the rotary knob is great for volume, and having it running my own compiled firmware on NixOS with zero proprietary software anywhere in the stack is exactly what I wanted.

If you're interested in the Split70, check it out on the [Epomaker site](https://epomaker.com?sca_ref=11173479.3YZ7kVjt1s). The [QMK firmware repo](https://github.com/qmk/qmk_firmware/tree/master/keyboards/epomaker) has support for several Epomaker boards, and [Epomaker's GitHub](https://github.com/Epomaker) has some of their source code published directly.

---

### I Set Up a Racing Sim on Linux Instead of Paying for Driving Lessons

Date: 2026-04-14
Tags: AI, Linux, Hardware, NixOS
URL: https://rexford.dev/blog/sim-wheel-linux-with-ai

So driving lessons in the UK are mad expensive, right? We're talking £30-40 per hour, and you need loads of them before you're even close to ready for a test. And I had this thought: what if I could build some muscle memory at home first? Not replace actual lessons, but get the basics down so I'm not wasting paid hours learning where the clutch bites.

I already had a NixOS machine sitting there. I just needed a wheel, pedals and something that would actually run on Linux. And this is where most people give up, because sim racing is completely Windows-dominated.

## The hardware

I went with a Logitech G923 Racing Wheel (the PlayStation/PC version). Full three-pedal set with gas, brake and clutch. The brake has this heavy progressive spring which actually feels realistic. I'm running it on my ASUS TUF laptop with an RTX 3050 and a Samsung 1440p monitor over HDMI. Not exactly a high-end sim rig, but it does the job.

For the game I went with City Car Driving on Steam.
It's specifically designed for learning traffic rules, city driving and exam scenarios, which is exactly what I needed. It's not a racing game, it's a driving practice tool.

## What Claude actually did

This is the part that still blows my mind. I told Claude what I had and what I wanted to achieve. And it didn't give me a list of instructions to follow. It went into my NixOS config and did everything itself.

First it installed the `new-lg4ff` kernel driver for force feedback support. Added it straight to `boot.extraModulePackages` in my NixOS config and rebuilt the system. Force feedback was working immediately after that.

Then it got City Car Driving running through Proton. It figured out that I needed GE-Proton (version 10-34) for proper Wayland support, set up the right launch options so the game renders on my external monitor instead of the laptop screen using `PROTON_ENABLE_WAYLAND=1 PROTON_WAYLAND_MONITOR=HDMI-A-1`, and configured NVIDIA offloading so the RTX 3050 handles the rendering instead of the integrated Intel GPU.

But here's where it gets really impressive. There was a Wine bug where the G923's PlayStation variant has a different USB product ID (`0xC266`) that Wine's `bus_sdl.c` doesn't recognise properly. This caused the brake pedal axis to map incorrectly. Instead of showing up as `axis:rz` like it should, it was coming through as `axis:rx`. Claude traced this all the way down to the Wine source, figured out the mapping issue and then configured the in-game bindings to match the actual axis assignments.

It also set up the full button and axis mapping for everything. Steering, gas, brake, clutch, indicators, headlights, engine start, gear shifts, handbrake. Every single control mapped and working.

## The technical details

For anyone running a similar setup, here's what the final config looks like.
The kernel module goes in your NixOS hardware config: `boot.extraModulePackages = [ config.boot.kernelPackages.new-lg4ff ];`

The Steam launch options handle Wayland, monitor selection and GPU offloading: `PROTON_ENABLE_WAYLAND=1 PROTON_WAYLAND_MONITOR=HDMI-A-1 DXVK_ASYNC=1 __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command%`

No udev rules needed, because the generic HID support plus Steam's input rules are enough. The key thing is getting the pedal axes right. Gas and brake both need `FullAxisMode=true` and `Inversion=true` in the game's profile database. And whatever you do, don't add a G923 alias XML file, because it strips the axis modifiers on every launch and breaks everything.

## Does it actually save money?

A driving instructor charges roughly £35 an hour. The wheel cost significantly less than that per hour of practice I've put into it. And I know the maths isn't exactly one to one, right? You can't learn to check mirrors and deal with real traffic in a sim. But the spatial awareness, the pedal coordination and the steering input genuinely do transfer.

City Car Driving specifically focuses on traffic rules and real driving scenarios, which makes it way more useful than a racing game for this purpose. I'm building muscle memory for clutch control and road positioning in a zero-risk environment where I can just reset and try again.

## Why this matters

The whole setup, from plugging in the wheel to doing laps, was handled by Claude in one session. It installed the kernel driver, configured Proton, traced a Wine bug with the pedal mapping, set up GPU offloading, configured the monitor output and mapped every single control. I didn't read a single forum post or edit a config file manually.

That's what I mean when I say AI is a force multiplier. All the knowledge to set this up existed somewhere out there, scattered across forum posts and kernel docs and Wine bug trackers.
Claude took all of that, figured out what applied to my exact setup and just did it. And I spent my evening practising driving instead of debugging config files. Which is kind of the whole point.

---

### What Running Everything on Claude Code Actually Looks Like

Date: 2026-03-30
Tags: AI, Business, Technical
URL: https://rexford.dev/blog/building-agentic-business

So I run my entire operation through Claude Code. And I don't mean I use it as a fancy autocomplete or ask it questions occasionally. I mean it's the primary interface for building, deploying and managing basically everything I do. It handles my infrastructure, my communication, my research and most of my development.

I know that sounds like hype. So let me just walk through what my actual day looks like.

## The setup

I've got three laptops and two servers, all running NixOS with a shared declarative config. That config manages everything. Packages, services, secrets, SSH keys, MCP servers, the lot. Setting up a new machine is basically two commands, and it's got the same environment as everything else. That's one of the things I really love about NixOS: the reproducibility means I'm never worried about my agents messing up a system, because I've always got a working config to fall back to.

Claude Code is my main terminal tool, and it's connected to MCP servers for Protonmail (email through Proton Bridge), WhatsApp, Google Drive and a bunch of custom services I've built. Each project has its own CLAUDE.md file that documents the infrastructure, deployment commands, credential references and project context. So any instance of Claude that opens a project knows exactly what it's working with straight away.

On any given day I'll have about five Claude Code instances running at the same time. I jump between them. Some are doing work stuff, others are handling business things like consultancy or my own projects. It's like having a team, except it's just me and a bunch of agents.
## What it actually does day to day

So for infrastructure: when I registered this website, rexford.dev, Claude Code hit the Cloudflare API directly with my token. It deleted all the default Namecheap DNS records, added the Protonmail MX, SPF, DKIM and DMARC records, created a Cloudflare Pages project, built a SvelteKit site, deployed it and wired up the custom domain. I didn't touch the Cloudflare dashboard once. The whole thing took maybe 30 minutes.

For development, I describe what I want and it writes it, builds it, deploys it. And I don't mean in a vague "AI writes code" way. I mean in a "here's the production URL, it's live" way. This site you're reading was built and deployed in a single session. The design system, the blog engine, the contact form, the SEO. All of it.

Communication is a big one. Through the MCP servers, Claude Code can read and send emails via Protonmail using any of my custom domain addresses, handle WhatsApp messages with the right tone for each context and manage files in Google Drive. Each channel has specific rules so the tone always matches what's expected. WhatsApp is professional and neutral. Emails match the brand they're sent from.

I've even got it connected to my Starling Bank account. It can look at my spending patterns, flag stuff I might not have noticed and help me improve my financial habits. So it's not just handling work and business, it's genuinely helping me manage my life better. Having that kind of visibility across everything in one place is something I don't think enough people are doing with AI yet.

And research. When I was looking into the Anthropic Fellows Programme, Claude Code searched the web, fetched the actual job listings from Greenhouse, pulled out every requirement and gave me an honest assessment of my chances based on my actual profile. Not flattery. It told me straight that my chances were low and explained exactly why. That kind of honesty is genuinely useful.
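Back on the infrastructure side, that DNS bootstrapping boils down to a handful of API calls. Here's a rough stdlib-only sketch against Cloudflare's v4 `dns_records` endpoint — the record values follow Proton Mail's published setup but are illustrative placeholders, not my exact configuration:

```python
# Sketch of creating DNS records via Cloudflare's v4 API (stdlib only).
# Record values are illustrative, not my real DNS.
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def dns_record(rtype, name, content, priority=None):
    """Build the JSON payload for a single DNS record."""
    rec = {"type": rtype, "name": name, "content": content}
    if priority is not None:  # MX records carry a priority
        rec["priority"] = priority
    return rec

def create_record(token, zone_id, record):
    """POST one record to the zone; needs a token with DNS edit permission."""
    req = urllib.request.Request(
        f"{API}/zones/{zone_id}/dns_records",
        data=json.dumps(record).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Mail records for a custom domain (illustrative values):
records = [
    dns_record("MX", "rexford.dev", "mail.protonmail.ch", priority=10),
    dns_record("TXT", "rexford.dev", "v=spf1 include:_spf.protonmail.ch ~all"),
]
```

Nothing clever in there, which is exactly why it's the kind of task an agent handles well: a documented API, a fixed list of records, and a token scoped to DNS edits.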
## Why NixOS makes this work so well

The reason all of this comes together is how NixOS and Claude Code interact. And not many people are talking about this, right? Because everything in NixOS is declarative, when Claude opens a project on any of my machines it can immediately look at the config and understand what system it's on, what tools are installed, what services are running and how everything fits together. It's like giving the agent a complete map of the environment before it even starts.

And that means I can let agents make real changes to my infrastructure without being scared they'll break something, because worst case I've always got a working config to rebuild from. I can literally go for a walk, spend some time with family, come back and the agents have set up a new service. My infrastructure doesn't become a chore, it becomes an asset.

I've also got what I call agent-ops, which is basically a playbook that documents every setup pattern I've used. So when I spin up a new project it follows the exact same structure. New domain, DNS, email, website, deployment. It's all repeatable. Each project gets its own CLAUDE.md, and each CLAUDE.md builds on what I've learned from the last one. So over time the whole system gets smarter.

## Being honest about the limits

I want to be real about this, because there's so much hype around AI online and most of it is rubbish.

You still need to know what you want to build. Claude executes really well, but it doesn't come up with the ideas. That's still on me. What to build, who it's for, why it matters. That's human work and it always will be.

And when something completely new comes up that it hasn't seen patterns for before, it can go in circles a bit. That's when I step in and use my own judgement.

I also still check what it does, right? I give it access to API tokens and deployment commands, but I'm not just handing everything over and walking away. I review the changes.
I make sure nothing's been done that shouldn't have been.

## What I actually think about all this

Look, this isn't about AI replacing developers. I'm a developer who happens to use AI as my primary interface. The code still needs to be correct. The architecture still needs to make sense. The decisions are still mine. But all the mechanical stuff, the boilerplate, the API calls, the config files, the deployment pipelines. That's handled. And that frees up my head for the parts I actually care about. The designing, the innovating, building things that haven't been built before.

And honestly, I've never been more productive and more relaxed at the same time. That sounds like a contradiction but it's not. When the procedural stuff is taken care of, you stop stressing about it. You stop context switching between the creative work you want to do and the maintenance work you have to do. It just gets done in the background while you focus on what matters.

That's what I care about. AI is a multiplier, and if we don't keep working on the base it multiplies, we stay static and people overtake us. So I use AI to handle the procedural stuff, and I focus on pushing the creative and strategic side. The ideas have to come from somewhere, and right now that somewhere is still us.

---

### My Privacy and Security Setup on NixOS

Date: 2026-03-12
Tags: Privacy, Security, NixOS, Linux
URL: https://rexford.dev/blog/privacy-security-setup

I take privacy and security pretty seriously. Not in a tinfoil hat way, but in a "I want to know exactly what's leaving my machine and where it's going" way. And running NixOS makes this significantly easier than it would be on any other distro, because I can declare my entire security posture in config files that I version control.

Let me walk through what my actual setup looks like.

## OpenSnitch - application firewall

This is probably the most important piece. I run OpenSnitch as an application-level firewall with the default action set to deny.
That means every single process that tries to make an outbound connection gets blocked unless I've explicitly allowed it. It uses eBPF for process monitoring and nftables for the actual filtering.

I've got rules for the stuff that obviously needs network access. Firefox, Claude Code, git, the nix daemon, Protonmail Bridge, Syncthing, Spotify. But here's the thing: curl is intentionally NOT whitelisted. It's the number one tool used by supply chain attacks to download payloads, so every curl request that happens on my system prompts me through OpenSnitch and I can see exactly where it's going. If some random npm package tries to phone home, I'll know about it.

I've also got a hard block rule on Discord domains. Not because I have anything against Discord, but because I don't want anything on my system talking to their servers without me knowing.

## Mullvad VPN

Mullvad is my VPN, and I've got some specific hardening around it. When Mullvad is enabled on a machine, my NixOS config automatically disables IPv6 system-wide. The reason is that Mullvad's tunnel has IPv6 turned off, so any IPv6 traffic would bypass the tunnel and leak my real address. The config also forces Docker containers to use the host's systemd-resolved stub instead of falling back to Google DNS during Mullvad transitions.

I've built custom Go tools for managing Mullvad too. A TUI for quick relay switching and a relay selector. Both compiled as Go modules in my nix config. I also use SOCKS proxies through Mullvad for routing specific applications through different exit nodes, which gives me more granular control over what traffic goes where.

I've also got Tailscale set up with Mullvad as my exit node. So all my devices, including my phone, are connected to my infrastructure through Tailscale, but the traffic exits through Mullvad. That means I can access any of my servers from anywhere, even from my phone on mobile data, and I've still got the privacy of a VPN. It's the best of both worlds, right?
Private mesh networking for my fleet, and encrypted tunnelling for everything going out to the internet.

## Firejail sandboxing

Every application that doesn't need full system access runs in a firejail sandbox. I've got custom profiles for Spotify, Telegram and Steam. Each profile explicitly blacklists sensitive directories like `.ssh`, `.gnupg`, `.mozilla` and my nix-config. They whitelist only what the app actually needs and drop all capabilities, disable printers and USB, restrict namespaces and filter D-Bus access.

The Spotify profile, for example, can only see its own cache and config directories plus the nix store. It can't see my home directory, it can't see my SSH keys and it can't see any of my projects. If Spotify gets compromised, the blast radius is basically zero.

## Secrets management with agenix

All my secrets are managed through agenix. API keys, tokens, passwords: they're all encrypted with age and only decrypted at activation time on the machines that need them. The plaintext never sits on disk; only the encrypted files live in the nix store.

I've even built a custom MCP secrets server that sits between Claude Code and my secrets. It runs as a dedicated system user, reads secrets from agenix and verifies callers by walking the process tree and matching against the exact Claude Code store path. A compromised process running as my user can't read the secrets, because they're owned by a different user, and it can't impersonate Claude because of the exact store path pinning. It's a proper defence-in-depth approach.

## GnuPG

GPG is enabled with SSH support through the agent, so I can use my GPG keys for both signing and SSH authentication. Kleopatra is installed for key management.

## Why NixOS makes this work

The thing about all of this is that it's declarative. My entire security setup is in version-controlled nix files. If I set up a new machine, it gets the same OpenSnitch rules, the same Mullvad hardening, the same firejail profiles, the same pentesting tools.
I don't have to remember what I installed or configured. It's just there. And because it's all code, I can review changes, roll back if something breaks and know exactly what's running on every machine in my fleet.

That's the kind of confidence you don't get from manually installing security tools and hoping you didn't forget anything.

---

### Why NixOS and AI Agents Work So Well Together

Date: 2026-02-18
Tags: NixOS, AI, Infrastructure
URL: https://rexford.dev/blog/nixos-and-ai-agents

Something I don't see many people talking about is how well NixOS works with AI agents. And I don't just mean "it runs on Linux so Claude can use the terminal". I mean there's something fundamentally different about giving an agent access to a declarative system versus a traditional mutable one.

## Agents can read the whole infrastructure

When Claude opens a project on any of my machines, the first thing it can do is look at my NixOS config. And because everything is declared in nix files, it can immediately understand what system it's on, what packages are installed, what services are running, what secrets are available and how all the machines in my fleet relate to each other.

On a traditional distro the agent would have to run a bunch of commands to figure out the state of the system. What's installed? What version? Is this service running? What config files exist? With NixOS, all of that is in one place in a format that's easy to parse. The agent gets a complete map before it even starts working.

## I never worry about unrecoverable states

This is the big one, right? On Arch or Ubuntu or whatever, if an agent makes a bad change it can brick your system. Wrong kernel module, broken init config, corrupted package, whatever. You're looking at recovery media and hoping you have backups.

With NixOS I literally don't have that fear. Every generation is stored, and I can roll back to any previous working state at boot time.
If an agent makes a change that breaks something, I just reboot into the last working generation and the broken change is gone. The system is always recoverable. Always.

That means I can let agents make real infrastructure changes without hovering over them. I can go for a walk, spend time with family and come back knowing that even if something went wrong, I'm two button presses away from a working system again. For business-critical infrastructure, that confidence is everything.

## Access to the entire nixpkgs ecosystem

This is genuinely a game changer. Nixpkgs is one of the largest package repositories in existence. Over 100,000 packages, all available declaratively. When an agent needs a tool, it can just add it to the config and rebuild. No hunting for PPAs, no compiling from source, no dependency conflicts.

I've had Claude set up entire server stacks just by adding a few lines to my nix config. Jellyfin, Kavita, Kiwix, Forgejo, Supabase, n8n, all running as declarative services. Each one takes a few lines of nix and comes with sensible defaults. The agent doesn't need to know how to install and configure each service from scratch, it just needs to know how to write nix.

## The friction point - too many agents building at once

I want to be honest about something that actually caused me problems. When you've got multiple Claude Code instances running on the same machine and they're all making nix config changes, they can start conflicting with each other. One agent is doing a `nixos-rebuild switch` while another one is trying to build a different change. Nix doesn't handle concurrent builds on the same system gracefully, and you end up with lock conflicts and failed builds.

I had to solve this by building a custom coordination system. Basically a NixOS agent manager that agents check in with before they make system changes.
It tracks who's currently building, queues changes if someone else is mid-rebuild and makes sure agents are aware of what other agents have changed since their last build. It's not complicated, but it's the kind of thing you only figure out after your fifth concurrent build failure at 2am.

## Reproducibility across the fleet

I've got three laptops and two servers all sharing the same nix config. Each machine has its own host-specific configuration, but the base system, the security setup, the development tools, the services, they're all defined once and shared. When I add a new tool or change a configuration, it applies everywhere on the next rebuild.

That means every agent on every machine is working in a consistent environment. There's no "works on my laptop but not the server" problem. The config is the source of truth, and every machine converges to the same state.

## What this means for the future

I think NixOS is going to become the default choice for AI-heavy infrastructure. The combination of declarative configuration, atomic rollbacks, reproducibility and the massive package ecosystem makes it the ideal operating system for letting agents manage your systems. You get all the power of giving agents real system access with none of the "what if it breaks everything" anxiety.

And here's the thing: you don't even need to be running NixOS to start using nix. Nix flakes work on macOS, Ubuntu, Fedora, Arch, basically any distro. You can install the nix package manager on whatever you're already running and start using flakes to manage your development environments, your project dependencies and your tooling. You get the reproducibility without having to switch your whole operating system.

So if you're curious about this but the idea of switching to NixOS feels like too big a jump, just install nix on your current system and start with a flake for one project. Once you see how it works you'll understand why I went all in.
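To make that "one flake for one project" suggestion concrete, here's roughly what a first `flake.nix` for a per-project dev shell looks like. The package choices and the `system` value are placeholders — swap in your own:

```nix
{
  description = "Per-project dev shell, pinned to a nixpkgs revision";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      # Adjust for your machine, e.g. "aarch64-darwin" on Apple Silicon.
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        # Example tools; everyone on the project gets these exact versions.
        packages = [ pkgs.python3 pkgs.nodejs pkgs.ripgrep ];
      };
    };
}
```

Drop that in a repo, run `nix develop`, and you're in a shell with exactly those tools, reproducibly, on any distro with nix installed.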
And if you are running AI agents on your infrastructure and you're not using nix yet, I'd seriously consider it. The initial learning curve is steep, but once you're past it, the confidence of knowing your system is always recoverable and always reproducible is worth it. Especially when you've got five Claude Code instances making changes at the same time.