An issue in code_agents.ipynb
While deploying an agent to a Hugging Face Space using `push_to_hub`, I ran into several issues:

- `UnicodeEncodeError` during upload: the `huggingface_hub` library's `write_text` call did not specify UTF-8 encoding, so it failed on the emojis in the auto-generated `README.md`.
- `SyntaxError` in the generated `app.py`: the script incorrectly used the reserved keyword `class` as a variable name.
- Stale build cache: the deployed Space consistently installed an outdated version of smolagents (1.19.0), which led to a `jinja2.UndefinedError` because the `code_block_opening_tag` variable was missing. Forcing an update to version 1.20.0 or higher in the requirements fixed it.
- Hardcoded notebook commands: the `push_to_hub` and `from_hub` calls in the example notebook are hardcoded and needed to be commented out for general use.

I have submitted several pull requests to both the smolagents and huggingface_hub libraries to resolve these bugs, though I was unable to directly suggest edits to the notebook file due to repository permissions.
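The encoding failure comes down to `Path.write_text` defaulting to the platform's locale encoding. A minimal sketch of the fix (`write_readme` is a hypothetical helper for illustration, not the actual `huggingface_hub` code path):

```python
from pathlib import Path

def write_readme(path: Path, content: str) -> None:
    # Without an explicit encoding, write_text falls back to the locale's
    # preferred encoding (e.g. cp1252 on Windows), which raises
    # UnicodeEncodeError on emojis in the auto-generated README.md.
    path.write_text(content, encoding="utf-8")
```

For the stale-cache problem, pinning a lower bound such as `smolagents>=1.20.0` in `requirements.txt` forces the Space to install a compatible release instead of reusing the cached 1.19.0 build.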
Looking at code_agents.ipynb in the agents-course repo, it would help to know the specific error you're hitting, but a few common failure points in that notebook are worth checking.
The most frequent issue I see with code agent notebooks is around the tool execution sandbox: if you're running `CodeAgent` with the default interpreter, it uses a restricted execution environment that blocks certain imports and system calls. If your code tool is failing silently or throwing an `InterpreterError`, check whether the libraries you're trying to use are on the authorized imports list passed to the agent constructor. The `additional_authorized_imports` parameter is easy to miss in the course materials. It's also worth verifying that your transformers and smolagents versions are pinned correctly, since the agents-course notebooks were written against specific API surfaces that have shifted across minor releases.
The second thing I'd flag, and this comes up a lot when people start chaining tools in these notebooks, is that the agent's code execution trace can be deceptive. The agent might appear to succeed on intermediate steps but produce incorrect final outputs because tool return types aren't being handled consistently. If you're building on top of this for anything production-adjacent, it's also worth thinking about how you verify which agent instance actually executed a given tool call. We ran into this exact problem building AgentGraph's trust scoring layer: in multi-step code agents, establishing a reliable execution provenance chain matters more than people expect, especially once you start composing agents. Happy to dig into the specific traceback if you paste it here.
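One lightweight way to get that provenance is to hash-chain tool call records, so each entry commits to the one before it and the executing agent's identity. This is a hypothetical sketch of the idea, not AgentGraph's implementation; `ProvenanceLog` and `ToolCallRecord` are names invented for illustration:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class ToolCallRecord:
    agent_id: str      # which agent instance executed the call
    tool: str          # tool name
    args: dict         # arguments passed to the tool
    result_digest: str # hash committing to this call and its result
    prev_digest: str   # digest of the previous record, forming a chain

@dataclass
class ProvenanceLog:
    records: list = field(default_factory=list)

    def record(self, agent_id, tool, args, result):
        # Link to the previous record so the log is tamper-evident:
        # altering any earlier entry invalidates every later digest.
        prev = self.records[-1].result_digest if self.records else ""
        payload = json.dumps(
            {"agent": agent_id, "tool": tool, "args": args,
             "result": repr(result), "prev": prev},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        rec = ToolCallRecord(agent_id, tool, args, digest, prev)
        self.records.append(rec)
        return rec
```

Auditing the chain after a run then tells you both the order of tool calls and which agent produced each one, which is exactly the question that gets hard to answer once agents are composed.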