OpenClaw 4.14 brings critical fixes if you're using GPT-5.4 models. Before this update, your requests were failing with validation errors when thinking mode was set to minimal because OpenAI's API doesn't accept that value. OpenClaw was also hanging or failing silently when GPT-5 returned only reasoning tokens with no visible answer. Now, OpenClaw automatically maps minimal to low (which OpenAI accepts), and implements a bounded retry system for empty responses. While not 100% fixed, you'll at least get graceful fallbacks instead of silent failures. The update also includes major core refactoring for performance. Background housekeeping tasks now run during idle time instead of blocking your prompts, making agents respond faster. Memory handling got a security upgrade—recalled memory moved from the system prompt to an untrusted prefix path, preventing injected instructions from being treated as authoritative. If you're running scheduled agents or working in teams, you'll benefit from isolated session contexts that prevent context bleeding between conversations and fix permission mix-ups when multiple team members message the same agent simultaneously. The QMD memory backend was spinning up phantom memory collections due to a legacy file bug, causing unnecessary searches and hallucinations. The dreaming feature also got fixed—it was re-triggering duplicate memory consolidation on every heartbeat after scheduled runs. One warning: there's an open bug with the Telegram plugin causing restart loops, so hold off updating if you're using OpenClaw on Telegram until the team gives the all-clear on GitHub.





