Aisuite: One Client, Any Model

When I’m in AI Studio mode (rapid prototyping, lots of experiments), I want to spend my time on prompts, evals, and breaking the code, not on re-learning yet another SDK. That’s why I use aisuite for most of my AI projects: one client API across providers; switching models is usually just changing a string like openai:gpt-5-mini → minimax:MiniMax-M2.1-lightning, or even local Ollama; and it stays close to the OpenAI-style shape, so it’s easy to adopt. Lately I’ve been using Minimax quite a bit for coding tasks because it hits a great price-to-quality ratio....
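The one-string switch described above can be sketched roughly like this. This is a minimal, hedged sketch of aisuite's OpenAI-style interface; exact method names may vary by version, API keys are assumed to be set via environment variables, and the model strings are just the examples from the text.

```python
def ask(model: str, prompt: str) -> str:
    """Send one prompt to any provider aisuite supports."""
    import aisuite as ai  # lazy import: pip install aisuite

    client = ai.Client()
    response = client.chat.completions.create(
        model=model,  # "provider:model-name" selects the backend
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same call, different backend -- only the string changes:
#   ask("openai:gpt-5-mini", "Summarize this diff.")
#   ask("ollama:llama3.2", "Summarize this diff.")

# The provider prefix is the whole switch:
provider, model_name = "openai:gpt-5-mini".split(":", 1)
```

The design point is that the provider lives in data (a string), not in code, so an experiment matrix over models is a loop over strings rather than a rewrite per SDK.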

February 9, 2026 · 1 min · Joor0x

Your .gitignore Won't Protect You From AI Agents

We often assume that adding files to .gitignore or .geminiignore is enough to keep them private. When it comes to local AI agents, that assumption is dangerously wrong. These ignore files are for version control and file search indexing, not a security shield. An AI assistant with access to your local environment can easily read any file, regardless of your ignore settings. A Simple, Scary Test Let’s prove it. Imagine you have a project with a simple ....
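The post's actual test is truncated above, but the underlying point can be sketched in a few lines: .gitignore is advice to git, not an access control, so any process with filesystem access (including an AI agent's file tool) reads "ignored" files just fine. The filenames and contents here are illustrative, not taken from the post.

```python
import pathlib
import tempfile

# Set up a throwaway "project" with a secret that git would ignore.
root = pathlib.Path(tempfile.mkdtemp())
(root / ".gitignore").write_text(".env\n")            # "protect" .env
(root / ".env").write_text("API_KEY=super-secret\n")  # the secret

# Nothing consults .gitignore on a plain read -- the secret leaks:
leaked = (root / ".env").read_text()
```

An agent doesn't even need to be malicious: a prompt like "read every config file to debug this" is enough for it to pull the file into its context.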

December 30, 2025 · 3 min · Joor0x

Custom Claude Code Notifications on Linux

If you use Anthropic’s Claude Code CLI, you know the struggle: you run a complex prompt or a long refactoring task, switch to another task, and forget to check back for five minutes. I recently came across Andrea Grandi’s post on how to solve this on macOS using terminal-notifier. Linux has a native equivalent that works perfectly. So… here’s how to set up desktop notifications for Claude Code on Linux....

December 6, 2025 · 3 min · Joor0x

Selecting an Open-Source DB for Financial Time Series

Choosing Your Data Engine: More Than Just Code When your algorithms depend on processing high-frequency data streams, or when you’re building ML models that need fast access to vast historical context, the time series database isn’t just a component – it’s the bedrock of your operation. A bottleneck here means missed opportunities, flawed analysis, or outright system failure. I’ve spent time evaluating the options because getting this wrong has consequences, especially when real capital or critical infrastructure is on the line....

April 6, 2025 · 8 min · Josep Oriol Carné

Three Proven Techniques to Reduce LLM Hallucinations

Large Language Models are super useful, but they have a well-known weakness: hallucinations. These are confident-sounding responses that are factually incorrect or completely fabricated. While no technique eliminates hallucinations entirely, these three strategies significantly reduce their occurrence in production systems. 1. Provide an Escape Hatch One of the most effective ways to reduce hallucinations is giving the model permission to admit uncertainty. LLMs are trained to be helpful, which sometimes leads them to generate plausible-sounding answers even when they lack sufficient information....
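The "escape hatch" technique can be sketched as a system prompt that explicitly permits uncertainty, so the model is not pressured to fabricate. The wording and helper below are illustrative, not quotes from the post.

```python
# Illustrative system prompt granting permission to admit uncertainty.
ESCAPE_HATCH = (
    "Answer using only the provided context. "
    "If the context does not contain the answer, reply exactly: "
    '"I don\'t have enough information to answer that."'
)

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble an OpenAI-style message list with the escape hatch."""
    return [
        {"role": "system", "content": ESCAPE_HATCH},
        {
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {question}",
        },
    ]

msgs = build_messages(
    "The invoice total is $120.",
    "Who signed the invoice?",  # not answerable from the context
)
```

With this prompt, a well-behaved model answers the second question with the refusal string instead of inventing a name, which is exactly the trade the technique makes: less coverage, far fewer fabrications.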

January 27, 2025 · 3 min · Joor0x

Critical Thinking in the Age of AI: Why Your Brain Is Your Last Competitive Advantage

The Calculator Effect, Amplified Remember when calculators became ubiquitous? Teachers worried we’d forget how to do mental math. They were right—but it didn’t matter much. The trade-off was acceptable. Generative AI is making the same bargain, except this time we’re not trading away arithmetic. We’re trading away thinking itself. A Swiss Business School study surveyed over 600 participants and found something uncomfortable: there’s a significant negative correlation between frequent AI use and critical thinking ability....

January 2, 2025 · 9 min · Joor0x