Livecoding AI’s New Superpower — Self-modification — in 30 mins or less

Apr 15, 2025

At the “Function calling is all you need” workshop at the AI Engineer Summit in NYC this February, Ilan Bigio from OpenAI demoed something genuinely exciting. The AI.engineer conferences, run by swyx (creator of the hit AI podcast Latent Space), have become the go-to events for serious AI engineers, with annual gatherings now in SF, NYC, and soon perhaps Paris. (Side note: if your company is looking for cred among AI engineers, this is the event series to sponsor.)

Returning to Ilan’s demo: in less than 20 minutes, in front of an enthusiastic room of AI engineers, we collaboratively livecoded an AI agent capable of dynamically creating its own Python tools at runtime.

It started simply: just a handful of Python lines. The audience jumped in, untangling the exec() syntax puzzle when eval() turned out not to be the right primitive. Before long, we had a working agent that could extend itself on the fly, writing Python code in real time. It ended up being just 19 easy-to-read lines:

livecoded agent that can modify itself
from swarm import Agent
from swarm.repl import run_demo_loop

agent = Agent(
    name="Bootstrap",
    instructions="",
    functions=[],
)

def parse_function(code_str):
    namespace = {}
    exec(code_str, namespace)
    fn_name = next(k for k in namespace if not k.startswith("__"))
    return namespace[fn_name]

def add_tool(name: str, python_implementation: str):
    function_obj = parse_function(python_implementation)
    agent.functions.append(function_obj)
    return agent

agent.functions = [add_tool]

run_demo_loop(agent)
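The eval-versus-exec puzzle the room untangled, in brief: eval() only evaluates a single expression, while exec() runs full statements such as function definitions, which is exactly what tool creation needs. A minimal illustration (standalone, no swarm required):

```python
# eval() evaluates a single expression and returns its value.
print(eval("2 + 3"))  # 5

# A def statement is not an expression, so eval() rejects it.
try:
    eval("def f(x): return x * 2")
except SyntaxError:
    print("eval() can't handle statements")

# exec() runs arbitrary statements into a namespace, so a function
# defined in a string becomes a real callable.
ns = {}
exec("def f(x): return x * 2", ns)
print(ns["f"](21))  # 42
```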

If you’ve read the code, you probably loved it. But hold on: Python’s exec()? Isn’t that dangerous?

Absolutely. Running arbitrary code without sandboxing is the digital equivalent of juggling flaming swords blindfolded. Yet, surprisingly (and thrillingly), it worked perfectly. The excitement was palpable.
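To make the risk concrete (this is an illustration of the concern, not a recommendation): exec() runs with the full privileges of your process, so a generated "tool" can touch anything your process can. A toy example, using a throwaway temp file as the victim:

```python
# Illustration of why unsandboxed exec() is risky: executed code has
# the same privileges as your process. Here a "generated tool" deletes
# a file. Real sandboxing means containers, subprocess isolation, or a
# restricted runtime, not exec() with a tweaked namespace.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is important data")
    path = f.name

# Code a model might emit; it could just as easily read API keys.
malicious = """
import os
os.remove(target_path)
"""

exec(malicious, {"target_path": path})
print(os.path.exists(path))  # False: the file is gone
```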

OpenAI’s content policy makes your agent refuse to do anything useful, quick, and noncommercial

A few months later, I tried replicating this at home. Setup was quick: about 15 minutes juggling dependencies like uv and virtual environments. But actually getting a demo screenshot for this blog post took another 15 minutes, thanks mostly to OpenAI’s aggressive nerfing. OpenAI seems committed to steering AI engineers toward paid APIs instead of freely allowing browser use or web scraping, which is a little disappointing given how powerful these capabilities are for agents, and how used we all are to OpenAI being purely useful.

I’ll also note that OpenAI has long had a morbid institutional fear of letting its models write and execute AI code, potentially for fear of triggering runaway-AGI suspicions. For the longest time, you couldn’t get an OpenAI model to spit out a valid OpenAI API call in Python: no LLMs calling LLMs. Apparently they’re still riding that “no AI engineering for users” train. This is possibly 95+% of the reason Claude and Gemini are so beloved by AI engineers for codegen.

Back to why I spun up a self-modifying agent: my original goal was practical. I wanted the agent to generate SEO keywords for my website by doing a little light scraping, but the heavy-handed safety policies I encountered forced me to dial down to something far more basic. Hence this blog post; I figured I’d at least get something worth sharing out of the experience.

After a couple of tries, I got a modest demo working: the agent successfully multiplied two numbers using a multiply_numbers() tool it had added to its own spec.

agent multiplying numbers with its own multiply_numbers() tool

Though humble, this demo clearly illustrated something big:

We’re witnessing the dawn of self-extensible AI. Agents modifying themselves dynamically isn’t sci-fi anymore — it’s becoming reality.

And that’s seriously exciting.

It felt like seeing Code Interpreter for the first time: one of those rare, spine-tingling moments when you realize you’re witnessing something game-changing.

Self-modifying AI is knocking at our door, and it’s about to shake up the world like a snow globe. But we gotta make sure we’re the ones holding the globe, or else we’ll be stuck in perma-nerf mode.


Written by Aditya Advani

Teaching is more distinctively human than learning.
