In the rapidly evolving landscape of AI coding agents, the transport layer has emerged as a critical factor in performance and efficiency. This article examines why stateful continuation matters for AI agents, particularly in agentic coding workflows. Through the 'Airplane Problem' and the 'Agentic Coding Loop', we show how the choice of transport layer shapes the speed and efficiency of these workflows.

Stateless APIs, though widely used, scale poorly with context: every request must retransmit the full conversation, so payloads grow linearly with each turn and latency climbs with them. Stateful continuation, exemplified by OpenAI's WebSocket mode, instead caches context server-side, cutting client-sent data by up to 80% and execution time by 15-29%. This advantage is not protocol-specific; it follows from avoiding retransmission of context. The gains come with trade-offs, however, in reliability, observability, and portability, which must be weighed carefully.

The discussion extends to the broader implications of server-side state management, the statefulness spectrum, and the impact on parallel execution. The article concludes by urging architects to treat the transport layer as a first-class design decision in AI agent systems and to weigh the potential benefits of stateful continuation.
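To make the scaling difference concrete, here is a minimal sketch, not tied to any real API, that compares cumulative upload volume for a stateless loop (which retransmits the whole history every turn) against a stateful continuation loop (which sends only the new delta). The message shapes and the `payload_bytes` helper are hypothetical, chosen only to illustrate the linear payload growth described above.

```python
import json

def payload_bytes(messages):
    """Size of the JSON body a single request would carry (hypothetical helper)."""
    return len(json.dumps(messages).encode("utf-8"))

history = []
stateless_sent = 0  # cumulative bytes uploaded when resending full context
stateful_sent = 0   # cumulative bytes uploaded when sending only deltas

for turn in range(1, 6):
    user_msg = {"role": "user", "content": f"apply edit number {turn}"}
    history.append(user_msg)

    # Stateless API: every request carries the entire conversation so far.
    stateless_sent += payload_bytes(history)
    # Stateful continuation: the server caches context; only the delta crosses the wire.
    stateful_sent += payload_bytes([user_msg])

    # Simulate a verbose assistant reply that inflates the next stateless request.
    history.append({"role": "assistant", "content": "diff applied " * 20})

print(f"stateless uploaded {stateless_sent} bytes, stateful uploaded {stateful_sent} bytes")
```

Even in this toy five-turn loop, the stateless total is several times the stateful total, and the gap widens with every turn, which is the core of the article's argument.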