I Built an AI Assistant That Actually Remembers Things

10 Feb 2026


It's 2am in Bali. I should be asleep, but instead I'm staring at SSH connection attempts scrolling past in my terminal. My DigitalOcean droplet, which I'd named "Sofia" for no particular reason, had just exposed its admin dashboard to the entire internet for about six hours.

"This was supposed to be simple," I muttered, reaching for my third coffee of the night.

Three days earlier, I'd discovered OpenClaw. The promise was straightforward: a self-hosted AI agent with actual persistent memory, Notion integration, and enough flexibility to automate repetitive tasks. No cloud dependencies, no vendor lock-in, just a lightweight agent running on my own infrastructure.

The reality turned out to be a three-day journey through security hardening, cron job debugging, and more "wait, why isn't this working?" moments than I'd like to admit. But I'm still glad I did it.


The Automation I Actually Wanted

Here's the thing about being a developer with 10+ years under your belt: you know what good automation looks like. You've built scrapers before, you understand APIs, you've set up cron jobs. But there's a difference between knowing how to do something and actually doing it.

I deactivated Instagram about six months ago. Too much noise, too much scrolling, and I was tired of the algorithm deciding what I should see. But there was one real cost to that decision: I kept missing events.

[Image: Sore]

I'm in Bali, and the live music scene here is surprisingly good. Indie bands from Jakarta, international acts passing through on regional tours, art installations in Ubud, underground electronic shows in Canggu. The kind of stuff I actually want to go to.

Problem is, almost all of these events are announced exclusively on Instagram. Venue stories. Artist posts. Random event pages. By the time someone tells me about a show, tickets are sold out or I've already made other plans.

[Image: Chiharu Shiota, The Soul Trembles]

I'm a developer. I automate things for a living. I've built scrapers that handled more complex data than event listings. But every week I'd still ask friends: "Any good shows happening?" Like it's 2010.

The other thing eating my time was job hunting. I'm currently open to remote work (freelance, contract, part-time), and the APAC timezone means most positions get posted while I'm asleep. I'd spend 60-90 minutes every morning scanning job boards, filtering through hundreds of listings. By noon I'd realize I hadn't written a single line of code.

I know exactly how to build these scrapers. Tech stack filtering, location requirements, relevance scoring. I could spin up a Python script in an afternoon. But then I'd have to maintain it. Job boards change their HTML every few months. Event sites add Cloudflare protection. What starts as a weekend project turns into a maintenance burden.

And then there was the memory problem. I'd be three days deep into a side project, making technical decisions. Then a freelance gig would come up, I'd context-switch for a week, come back and spend 30 minutes trying to remember: why did I choose this approach?

So when I heard about OpenClaw, an AI agent that could handle both the automation and the memory, I knew I had to try it. Not because I didn't know how to build these things separately, but because I was tired of building them separately.

Why I Chose OpenClaw Over Building It Myself

I've built scrapers before. The problem wasn't technical capability. It was maintenance.

OpenClaw offered something different: a framework where other developers were already solving these problems. Skills on ClawHub that I could install instead of building from scratch. A community dealing with the same scraping challenges and sharing solutions.

Plus, the memory system. I could build a note-taking system, sure. But would I actually maintain it? OpenClaw's persistent memory wasn't perfect, but it was there, working, battle-tested by other developers.

The pitch was simple: stop reinventing wheels, start using wheels other people are actively maintaining.

Setting Up Sofia (My Development Server)

I chose DigitalOcean's Singapore datacenter, the closest region to Bali with solid performance.

Initial setup was smooth: Ubuntu 24.04 LTS, install Node.js, install OpenClaw via npm. Twenty minutes and I had an agent running, web dashboard and all.

That's when it hit me: this dashboard was accessible to anyone who knew my server's IP. No password. No VPN. Just... open.

Three Hours of "Oh God, What Have I Done"

Look, I know better. I've set up production servers. I've done security audits. But there's something about a side project at 11pm that makes you skip steps you'd never skip at work.

The dashboard had been exposed for about six hours. Long enough that I kept thinking: what if someone found it?

I spent the next three hours locking everything down.

The lesson stuck: security isn't something you add later. You do it first, even when you're tired and just want to see the thing work.

Building the Automations (Finally)

With the server secured, I could focus on what I actually wanted: automation that worked while I slept.

Job Scraper

I built a scraper that runs daily at 5am (before I wake up) and filters jobs by tech stack (React, Node.js, TypeScript, WordPress, Python), location (APAC-friendly or worldwide), and contract type (freelance, contract, part-time).

The script scrapes multiple sources, deduplicates by URL, scores each job for relevance (0-100), and sends the top 3-5 to my Telegram.
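The dedupe-and-score step is simple enough to sketch. This is a rough illustration of the pipeline, not the actual skill code: the field names, scoring weights, and thresholds are all my assumptions.

```python
# Illustrative sketch of the filter/dedupe/score pipeline.
# Weights, field names, and sources are hypothetical.

STACK = {"react", "node.js", "typescript", "wordpress", "python"}
CONTRACT_TYPES = {"freelance", "contract", "part-time"}

def score(job: dict) -> int:
    """Rough 0-100 relevance score from stack, location, and contract type."""
    s = 0
    tags = {t.lower() for t in job.get("tags", [])}
    s += min(len(tags & STACK), 3) * 20                      # up to 60 pts for stack overlap
    if job.get("location", "").lower() in ("worldwide", "apac"):
        s += 25                                              # timezone-compatible
    if job.get("type", "").lower() in CONTRACT_TYPES:
        s += 15                                              # right engagement model
    return min(s, 100)

def digest(jobs: list[dict], top_n: int = 5) -> list[dict]:
    """Deduplicate by URL, rank by score, keep the top few relevant ones."""
    seen, unique = set(), []
    for job in jobs:
        if job["url"] not in seen:
            seen.add(job["url"])
            unique.append(job)
    ranked = sorted(unique, key=score, reverse=True)
    return [j for j in ranked[:top_n] if score(j) > 0]
```

The hard part in practice isn't this logic; it's keeping the per-site scrapers that feed `jobs` alive as boards change their HTML.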

First run found 47 jobs and filtered down to 3 genuinely relevant ones. That's 90 minutes of manual work automated.

Bali Events Scraper

This one was trickier. Event information in Bali is scattered across venue websites, Facebook events, Instagram posts, WhatsApp groups, and random aggregators.

I couldn't scrape Instagram directly (they're aggressive about blocking bots), so I ended up building a hybrid approach.

[Image: Zedd, Savaya Bali]

Every Sunday morning, I get a digest: "Music & Art Events This Week in Bali." It's not perfect: I still miss Instagram-only events. But it catches 60-70% of what I care about, which is way better than randomly asking friends.

The Memory System (Which Turned Out More Useful Than Expected)

OpenClaw stores conversations as markdown files organized by date. It sounds almost too simple to work well, but that simplicity is exactly why it works.

I'll be working on a project and make a decision: "I'm using PostgreSQL instead of MongoDB because this needs relational integrity." I tell the agent. It logs it.

A week later, I'm back on the project after a freelance gig, and I ask: "Why did I pick PostgreSQL for this again?" The agent pulls up the conversation from that specific day, the exact reasoning.
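The date-organized markdown layout makes that kind of recall almost trivial. Here's a minimal sketch of the pattern; the directory name and file format are my assumptions, not OpenClaw's actual storage code:

```python
from pathlib import Path

MEMORY_DIR = Path("memory")  # hypothetical: one markdown file per day

def log_decision(note: str, day: str) -> None:
    """Append a note to the markdown log for a given day (YYYY-MM-DD)."""
    MEMORY_DIR.mkdir(exist_ok=True)
    with (MEMORY_DIR / f"{day}.md").open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall(keyword: str) -> list[tuple[str, str]]:
    """Return (date, line) pairs mentioning the keyword, oldest first."""
    hits = []
    for path in sorted(MEMORY_DIR.glob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                hits.append((path.stem, line))
    return hits
```

Plain text plus dates is the whole trick: there's no database to migrate, and grep-style search over a few months of notes is instant.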

The shift took a few days to trust. Eventually you stop trying to hold everything in your head, because you know the external memory is reliable.

Notion Sync (The Feature That Almost Broke Everything)

I wanted scraped results to flow automatically into Notion. The Notion API is straightforward. You send blocks, you get a page. Simple.

Except for rate limiting. Three requests per second, hard limit. Creating a page with 300+ blocks? 100+ seconds. Add error handling and retries? Ten minutes per sync.
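Staying under the limit means chunking and throttling every append. A minimal sketch of that pattern, assuming Notion's documented ~3 requests/second average and 100-blocks-per-request cap; the `post` callable stands in for a real HTTP client, and the page ID is hypothetical:

```python
import time

NOTION_RATE = 3      # Notion's documented average request rate (req/s)
MAX_CHILDREN = 100   # max child blocks per append request

def chunked(blocks: list, size: int = MAX_CHILDREN):
    """Yield successive fixed-size slices of a block list."""
    for i in range(0, len(blocks), size):
        yield blocks[i:i + size]

def append_blocks(page_id: str, blocks: list[dict], post) -> int:
    """Append blocks to a page in rate-limited chunks.

    `post(url, payload)` stands in for an authenticated HTTP POST.
    Returns the number of blocks sent.
    """
    sent = 0
    for chunk in chunked(blocks):
        post(f"https://api.notion.com/v1/blocks/{page_id}/children",
             {"children": chunk})
        sent += len(chunk)
        time.sleep(1 / NOTION_RATE)  # throttle to stay under the average limit
    return sent
```

Even written carefully like this, every retry and backoff multiplies the wall-clock time, which is how a "simple" sync ends up taking minutes.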

What I ended up with was a server that would spike to 95% CPU, sometimes trigger overlapping processes, and occasionally just... silently fail.

The solution was humbling: I stopped trying to automate it entirely. Now I just ask the agent: "Copy these three jobs to Notion." It does it. Takes 30 seconds. No automation, but it works reliably.

Sometimes the right amount of automation is less than you think.

What Actually Stuck

The security panic taught me to never skip hardening, even for side projects. I had exposed that dashboard for six hours. Nothing bad happened, but it could have. That won't happen again.

Automation changes your behavior in unexpected ways. I trust the job scraper enough that I stopped manually checking boards entirely. I check the events digest every Sunday. That's both liberating and slightly scary.

The memory system is weird but it works. Now I default to "tell the agent" instead of "try to remember this." That cognitive shift is significant.


If Someone Asked Me "Should I Do This?"

Here's what I'd actually tell them over coffee:

If you're comfortable spinning up a VPS and have some specific thing eating 5+ hours of your week, yeah, try it. The setup is a weekend. The maintenance is low once it's running.

If you're manually checking job boards, event sites, or any repetitive information source, this will save you time. If you're juggling multiple projects and losing context, the memory system will help.

The real question isn't "is this good?" It's "do you have a problem this solves?" For me, it was missing events and wasting time on manual research. What's yours?