Your .gitignore Won't Protect You From AI Agents

We often assume that adding files to .gitignore or .geminiignore is enough to keep them private. When it comes to local AI agents, that assumption is dangerously wrong. These ignore files exist for version control and file-search indexing; they are not a security shield. An AI assistant with access to your local environment can read any file, regardless of your ignore settings.

A Simple, Scary Test

Let’s prove it. Imagine you have a project with a simple .env file: ...
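The teaser’s point is easy to demonstrate. A minimal sketch, assuming nothing beyond the Python standard library (the file contents and key name here are made up for illustration):

```python
import tempfile
from pathlib import Path

# Set up a toy project: .gitignore lists .env, yet the secret still sits on disk.
project = Path(tempfile.mkdtemp())
(project / ".gitignore").write_text(".env\n")
(project / ".env").write_text("API_KEY=sk-hypothetical-secret\n")

# An agent with filesystem access does exactly this: a plain read.
# .gitignore only tells git what not to track; it gates nothing at the OS level.
secret = (project / ".env").read_text()
print(secret)
```

Running this prints the "secret" immediately; nothing in the ignore file is consulted on the read path.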

December 30, 2025 · 3 min · Joor0x

Custom Claude Code Notifications on Linux

If you use Anthropic’s Claude Code CLI, you know the struggle: you kick off a complex prompt or a long refactoring task, switch to something else, and forget to check back for five minutes. I recently came across Andrea Grandi’s post on solving this on macOS using terminal-notifier. Linux has a native equivalent that works just as well, so here’s how to set up desktop notifications for Claude Code on Linux.

The Linux Alternative: notify-send

On macOS, Andrea used terminal-notifier. On Linux, the standard tool for sending desktop notifications is notify-send, part of the libnotify library. It came preinstalled on my Lubuntu 24.04, but first, make sure you have it installed. ...
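The basic idea can be sketched in a few lines. This is my own illustrative wrapper, not the setup from the post, and it assumes notify-send is on your PATH (it degrades gracefully if not):

```python
import shutil
import subprocess

def notify(title: str, body: str) -> list[str]:
    """Build a notify-send command and fire it if the tool is available."""
    cmd = ["notify-send", "--urgency=normal", title, body]
    if shutil.which("notify-send"):  # present when libnotify tools are installed
        subprocess.run(cmd, check=False)
    return cmd

cmd = notify("Claude Code", "Task finished, come back!")
```

You could invoke something like this from a shell hook or wrapper script whenever a long-running task completes.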

December 6, 2025 · 3 min · Joor0x

Selecting an Open-Source DB for Financial Time Series

Choosing Your Data Engine: More Than Just Code

When your algorithms depend on processing high-frequency data streams, or when you’re building ML models that need fast access to vast historical context, the time series database isn’t just a component: it’s the bedrock of your operation. A bottleneck here means missed opportunities, flawed analysis, or outright system failure. I’ve spent time evaluating the options because getting this wrong has consequences, especially when real capital or critical infrastructure is on the line. ...

April 6, 2025 · 8 min · Josep Oriol Carné

Three Proven Techniques to Reduce LLM Hallucinations

Large Language Models are enormously useful, but they have a well-known weakness: hallucinations, confident-sounding responses that are factually incorrect or completely fabricated. While no technique eliminates hallucinations entirely, these three strategies significantly reduce their occurrence in production systems.

1. Provide an Escape Hatch

One of the most effective ways to reduce hallucinations is to give the model permission to admit uncertainty. LLMs are trained to be helpful, which sometimes leads them to generate plausible-sounding answers even when they lack sufficient information. ...
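The escape-hatch technique amounts to a prompt addition. A minimal sketch; the exact wording below is illustrative, not a quote from the post:

```python
# An "escape hatch" instruction granting the model permission to admit uncertainty.
ESCAPE_HATCH = (
    "If you are not confident in the answer, say \"I don't know\" "
    "instead of guessing. Admitting uncertainty is better than inventing facts."
)

def build_system_prompt(task_instructions: str) -> str:
    """Append the escape hatch to whatever task prompt you already use."""
    return f"{task_instructions}\n\n{ESCAPE_HATCH}"

prompt = build_system_prompt("You answer questions about our internal billing API.")
```

The key design point is that the permission is explicit: without it, a helpfulness-trained model tends to fill gaps with plausible fabrication rather than decline.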

January 27, 2025 · 3 min · Joor0x

Critical Thinking in the Age of AI: Why Your Brain Is Your Last Competitive Advantage

The Calculator Effect, Amplified

Remember when calculators became ubiquitous? Teachers worried we’d forget how to do mental math. They were right, but it didn’t matter much; the trade-off was acceptable. Generative AI is making the same bargain, except this time we’re not trading away arithmetic. We’re trading away thinking itself. A Swiss Business School study surveyed over 600 participants and found something uncomfortable: a significant negative correlation between frequent AI use and critical-thinking ability. The more we lean on AI, the less sharp our own reasoning becomes. This isn’t speculation; it’s showing up in the data. ...

January 2, 2025 · 9 min · Joor0x