LLMs like Anthropic’s Claude are good at lots of things, but executing code isn’t one of them. Fortunately, the Claude 3 models support tool use, which lets you supplement their built-in capabilities with external “tools”.

In this guide we’ll use Python to hook up the Riza Code Interpreter API as a tool for Claude to use when it wants or needs to execute Python. Using Riza as the code execution environment keeps your local machine safe in case the model does something unexpected.

This guide assumes you have Riza and Anthropic API access.

First we import and initialize the Anthropic and Riza client libraries. Both clients read their API keys from environment variables (ANTHROPIC_API_KEY and RIZA_API_KEY).

import anthropic
import rizaio


client = anthropic.Anthropic()
riza = rizaio.Riza()

We’ll create a message for Claude that requires code execution to answer. Here we ask Claude to base32 encode the message “purple monkey dishwasher.”

messages = [
    {"role": "user", "content": "Please base32 encode this message: purple monkey dishwasher"},
]

Sending this message to Claude without offering any additional tools results in hallucinated output. Here are two consecutive responses to the above prompt.

Response 1
The base32 encoded version of the message "purple monkey dishwasher" is:

PRUPVY6ZCRMFZXGNLUMVXG64TFOQ
Response 2
Here is the base32 encoded version of the message "purple monkey dishwasher":

PRYXGEYRCUDQVS4
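
For reference, these responses came from a plain messages call with no tools parameter. A minimal sketch, assuming the client and messages defined above:

no_tools_response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    messages=messages,
)
print(no_tools_response.content[0].text)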

The results are inconsistent, and neither is correct (the correct result is OB2XE4DMMUQG233ONNSXSIDENFZWQ53BONUGK4Q=).
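
You can check the expected value locally with Python’s standard base64 module:

import base64

# Reference value for "purple monkey dishwasher" in base32.
expected = base64.b32encode(b"purple monkey dishwasher").decode()
print(expected)  # OB2XE4DMMUQG233ONNSXSIDENFZWQ53BONUGK4Q=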

For a correct result, we can offer Claude a tool to execute Python.

Before sending the message to Claude, we’ll describe the Riza Code Interpreter API as a tool using a tool definition that Claude understands.

tools = [
    {
        "name": "execute_python",
        "description": "Execute a Python script. The Python runtime does not have filesystem access, but does include the entire standard library. Make HTTP requests with the httpx or requests libraries. Read input from stdin and write output to stdout.",
        "input_schema": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "The Python code to execute",
                }
            },
            "required": ["code"],
        },
    },
]

Now we can send the message and tool definition to Claude.

response = client.beta.tools.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    tools=tools,
    messages=messages,
)

If Claude wants to use the available tool to execute Python code, the response will include a tool_use block with the execute_python name. This block will have a set of input parameters corresponding to the input properties we specified in our tool definition. In this case the parameter we care about is named code.
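
For illustration, a tool_use block looks roughly like this (the id and code values here are made up):

{
    "type": "tool_use",
    "id": "toolu_01A09q90qw90lq917835lq9",
    "name": "execute_python",
    "input": {
        "code": "import base64\nprint(base64.b32encode(b'purple monkey dishwasher').decode())"
    }
}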

We take the code that Claude wants to run and execute it via the Riza Code Interpreter API.

If the execution is successful, we add the output as a tool_result message to send back. This step is optional, as the code execution output might be all you need.

for block in response.content:
    if block.type == 'tool_use' and block.name == 'execute_python':
        print("Executing Python code via Riza...")
        print(block.input['code'])

        output = riza.command.exec(
            language="PYTHON",
            code=block.input['code'],
        )
        print(output)

        if int(output.exit_code) != 0:
            raise ValueError(f"non-zero exit code {output.exit_code}")

        # Append Claude's tool_use turn and our tool_result so the execution
        # output is visible to Claude on the next request.
        messages.append({
            "role": "assistant",
            "content": response.content,
        })
        messages.append({
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": output.stdout,
                }
            ],
        })

If necessary, we can send the updated list of messages back to Claude to get a final answer incorporating the code execution output.

response = client.beta.tools.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1024,
    tools=tools,
    messages=messages,
)
print(response)
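
If you only want the text of the final answer rather than the full response object, you can read it from the text blocks in response.content:

for block in response.content:
    if block.type == 'text':
        print(block.text)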

With the extra tool in its belt, Claude comes up with the correct response.

Response
The base32 encoded version of the message "purple monkey dishwasher" is:

OB2XE4DMMUQG233ONNSXSIDENFZWQ53BONUGK4Q=

See the full example on GitHub.