<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Zhihong Liu | RoPL</title><link>http://www.ropl.ai/author/zhihong-liu/</link><atom:link href="http://www.ropl.ai/author/zhihong-liu/index.xml" rel="self" type="application/rss+xml"/><description>Zhihong Liu</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sun, 01 Feb 2026 00:00:00 +0000</lastBuildDate><image><url>http://www.ropl.ai/author/zhihong-liu/avatar_hu2292398034611378212.jpg</url><title>Zhihong Liu</title><link>http://www.ropl.ai/author/zhihong-liu/</link></image><item><title>Any House Any Task (AHAT) is Now Public</title><link>http://www.ropl.ai/post/26-02-01-ahat/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate><guid>http://www.ropl.ai/post/26-02-01-ahat/</guid><description>&lt;p>🚀 Exciting News! We are thrilled to announce the release of our latest work: “Any House Any Task: Scalable Long-Horizon Planning for Abstract Human Tasks.” This research tackles long-horizon planning in large environments given ambiguous human instructions.&lt;/p>
&lt;h2 id="paper-details">Paper Details&lt;/h2>
&lt;p>&lt;strong>Title:&lt;/strong> Any House Any Task: Scalable Long-Horizon Planning for Abstract Human Tasks&lt;/p>
&lt;p>&lt;strong>Authors:&lt;/strong> Zhihong Liu, Yang Li, Renming Huang, Cewu Lu, Panpan Cai&lt;/p>
&lt;p>&lt;strong>Abstract:&lt;/strong> Open-world, language-conditioned task planning is crucial for robots operating in large-scale household environments. While many recent works attempt to address this problem using Large Language Models (LLMs) via prompting or training, a key challenge remains scalability: performance often degrades rapidly with increasing environment size, plan length, instruction ambiguity, and constraint complexity. In this work, we propose Any House Any Task (AHAT), a household task planner optimized for long-horizon planning in large environments given ambiguous human instructions. At its core, AHAT utilizes an LLM trained to map task instructions and textual scene graphs into grounded subgoals defined in the Planning Domain Definition Language (PDDL). These subgoals are subsequently solved through explicit symbolic reasoning to generate feasible and optimal long-horizon plans. To enhance the model&amp;rsquo;s ability to decompose complex and ambiguous intentions, we introduce TGPO, a novel reinforcement learning algorithm that integrates external correction of intermediate reasoning traces into Group Relative Policy Optimization (GRPO). Experiments demonstrate that AHAT achieves significant performance gains over state-of-the-art prompting, planning, and learning methods, particularly on human-style household tasks characterized by brief instructions but requiring complex execution plans.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Paper&lt;/strong>: &lt;a href="https://arxiv.org/abs/2602.12244" target="_blank" rel="noopener">Available on arXiv&lt;/a>&lt;/li>
&lt;li>&lt;strong>Project Page&lt;/strong>: &lt;a href="https://sii-liyang2024.github.io/ahat/" target="_blank" rel="noopener">AHAT Project Website&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>We are excited to share this work with the robotics community and look forward to your feedback and potential collaborations!&lt;/p></description></item><item><title>Any House Any Task: Scalable Long-Horizon Planning for Abstract Human Tasks</title><link>http://www.ropl.ai/publication/liu-2026-ahat/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate><guid>http://www.ropl.ai/publication/liu-2026-ahat/</guid><description/></item></channel></rss>