Hello World (TypeScript)

This TypeScript guide will show you the basics of E2B:

  • Connect the code interpreter to an LLM
  • Prompt the LLM to generate Python code
  • Execute the AI-generated Python code in a secure E2B sandbox

Get full code

Check out the full code in our cookbook.

Overview

  1. Install Code Interpreter SDK
  2. Prepare prompt and tools for LLM
  3. Prepare code interpreting
  4. Call LLM and parse response with tools
  5. Create code interpreter and run everything
  6. Save generated chart

1. Install Code Interpreter SDK

The Code Interpreter SDK allows you to run AI-generated code in a small, secure VM - an E2B sandbox - made for AI code execution. Inside the sandbox runs a Jupyter server that you can control from our SDK through the notebook.execCell() method.

Check out the SDK's repository on GitHub.

npm init -y \
&& npm i --save-dev typescript tsx @types/node \
&& npm i @e2b/code-interpreter

Get your E2B API key here and save it to .env in your root directory.

.env

E2B_API_KEY="e2b-api-key"
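With the SDK installed and your API key in .env, you can do a quick sanity check before wiring in an LLM. This is just a sketch (the file name check.ts is our own choice); it uses the same CodeInterpreter.create() and notebook.execCell() calls we'll rely on throughout the guide.

import 'dotenv/config'
import { CodeInterpreter } from '@e2b/code-interpreter'

async function check() {
  // Starts an E2B sandbox; the SDK reads E2B_API_KEY from the environment.
  const sandbox = await CodeInterpreter.create()

  // Run one cell in the sandbox's Jupyter server.
  const exec = await sandbox.notebook.execCell('print(1 + 1)')
  console.log(exec.logs) // stdout should contain '2\n'

  await sandbox.close()
}

check()

Run it with tsx check.ts; if you see the printed output, the sandbox is working.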

2. Prepare prompt and tools for LLM

We'll be using Anthropic's Claude 3 Opus model, but E2B works with any LLM, so feel free to pick the one you want!

Create the model.ts file and paste the following code.

model.ts

import { Tool } from '@anthropic-ai/sdk/src/resources/beta/tools'

export const MODEL_NAME = 'claude-3-opus-20240229'

export const SYSTEM_PROMPT = `
## your job & context
you are a python data scientist. you are given tasks to complete and you run python code to solve them.
- the python code runs in jupyter notebook.
- every time you call the \`execute_python\` tool, the python code is executed in a separate cell. it's okay to make multiple calls to \`execute_python\`.
- display visualizations using matplotlib or any other visualization library directly in the notebook. don't worry about saving the visualizations to a file.
- you have access to the internet and can make api requests.
- you also have access to the filesystem and can read/write files.
- you can install any pip package (if it exists) if you need to but the usual packages for data analysis are already preinstalled.
- you can run any python code you want, everything is running in a secure sandbox environment.
`

export const tools: Tool[] = [
  {
    name: 'execute_python',
    description: 'Execute python code in a Jupyter notebook cell and return any result, stdout, stderr, display_data, and error.',
    input_schema: {
      type: 'object',
      properties: {
        code: {
          type: 'string',
          description: 'The python code to execute in a single cell.'
        }
      },
      required: ['code']
    }
  }
]

This defines our system prompt and the tools array with the tools available to the LLM - namely the "execute_python" tool. A little bit later, we'll connect "execute_python" to E2B's code interpreter.
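To see what we'll be parsing later: when Claude decides to call this tool, its response contains a tool_use content block shaped roughly like the object below (a sketch with illustrative values only).

// Illustrative shape of a tool_use content block returned by Claude (values are made up).
const exampleToolUseBlock = {
  type: 'tool_use',
  id: 'toolu_example_id',            // assigned by the API
  name: 'execute_python',            // matches the tool name we defined above
  input: { code: "print('hello')" }, // matches our input_schema
}

In step 4 we look for this block, read input.code, and pass it to the code interpreter.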

3. Prepare code interpreting

We'll create a new function called codeInterpret() in a separate file codeInterpreter.ts.

codeInterpreter.ts

import { CodeInterpreter } from '@e2b/code-interpreter'

export async function codeInterpret(codeInterpreter: CodeInterpreter, code: string) {
  console.log(`\n${'='.repeat(50)}\n> Running following AI-generated code:\n${code}\n${'='.repeat(50)}`);

  const exec = await codeInterpreter.notebook.execCell(
    code,
    {
      // You can stream logs from the code interpreter
      // onStderr: (stderr: string) => console.log("\n[Code Interpreter stderr]", stderr),
      // onStdout: (stdout: string) => console.log("\n[Code Interpreter stdout]", stdout),
      //
      // You can also stream additional results like charts, images, etc.
      // onResult: ...
    }
  )

  if (exec.error) {
    console.log('[Code Interpreter error]', exec.error) // Runtime error
    return undefined
  }

  return exec
}

This function takes the CodeInterpreter object from our SDK and code as parameters. The code parameter is the code generated by the LLM.

Inside the function, we call the codeInterpreter.notebook.execCell() method. execCell() takes the code argument and executes that code inside the E2B sandbox.
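If you want to try codeInterpret() on its own before adding the LLM, a minimal sketch looks like this (a hypothetical snippet, not part of the final program):

import 'dotenv/config'
import { CodeInterpreter } from '@e2b/code-interpreter'
import { codeInterpret } from './codeInterpreter'

async function tryCodeInterpret() {
  const sandbox = await CodeInterpreter.create()

  // Hand-written code instead of LLM-generated code, just to exercise the function.
  const exec = await codeInterpret(sandbox, "print('hello from the sandbox')")
  if (exec) {
    console.log(exec.logs.stdout) // e.g. ['hello from the sandbox\n']
  }

  await sandbox.close()
}

tryCodeInterpret()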

4. Call LLM and parse response with tools

We're using Claude 3 Opus. Get your Anthropic API key and save it to your .env file.

.env

ANTHROPIC_API_KEY="anthropic-api-key"

Then install the Anthropic SDK:

npm i @anthropic-ai/sdk
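A note on the client setup: the Anthropic SDK reads the API key from the environment, so as long as dotenv loads your .env file you don't need to pass it explicitly. A small sketch:

import 'dotenv/config'
import Anthropic from '@anthropic-ai/sdk'

// Reads ANTHROPIC_API_KEY from the environment loaded by dotenv.
const anthropic = new Anthropic()

// Equivalent, with the key passed explicitly:
// const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })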

Now we'll put everything together. Create the index.ts file, import dependencies, and create the chat() function that will do the LLM calling and tool parsing.

index.ts

import * as fs from 'fs'

import 'dotenv/config'
import { CodeInterpreter, Execution } from '@e2b/code-interpreter'
import Anthropic from '@anthropic-ai/sdk'

import {
  MODEL_NAME,
  SYSTEM_PROMPT,
  tools,
} from './model'
import { codeInterpret } from './codeInterpreter'

const anthropic = new Anthropic()

async function chat(codeInterpreter: CodeInterpreter, userMessage: string): Promise<Execution | undefined> {
  console.log('Waiting for Claude...')

  const msg = await anthropic.beta.tools.messages.create({
    model: MODEL_NAME,
    system: SYSTEM_PROMPT,
    max_tokens: 4096,
    messages: [{role: 'user', content: userMessage}],
    tools,
  })

  console.log(`\n${'='.repeat(50)}\nModel response: ${msg.content}\n${'='.repeat(50)}`)
  console.log(msg)

  if (msg.stop_reason === 'tool_use') {
    const toolBlock = msg.content.find((block) => block.type === 'tool_use');
    const toolName = toolBlock.name
    const toolInput = toolBlock.input

    console.log(`\n${'='.repeat(50)}\nUsing tool: ${toolName}\n${'='.repeat(50)}`);

    if (toolName === 'execute_python') {
      const code = toolInput.code
      return codeInterpret(codeInterpreter, code)
    }
    return undefined
  }
}
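If you'd like to smoke-test chat() before writing the full run() function in the next step, a hypothetical snippet like this (added temporarily to index.ts) would do it:

// Temporary smoke test for chat(); remove it once run() below is in place.
async function smokeTest() {
  const sandbox = await CodeInterpreter.create()
  const exec = await chat(sandbox, 'Use Python to compute 2 ** 10 and print the result.')
  console.log(exec?.logs)
  await sandbox.close()
}

// smokeTest()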

5. Create code interpreter and run everything

Now we'll put it all together and run our program. At the end of index.ts, add the following code, which prompts the LLM to visualize a distribution of men's heights and print the median.

index.ts

async function run() {
  const userMessage = 'Visualize a distribution of height of men based on the latest data you know. Also print the median value.'

  const codeInterpreter = await CodeInterpreter.create()

  const codeOutput = await chat(codeInterpreter, userMessage)
  if (!codeOutput) {
    console.log('No code output')
    return
  }

  const logs = codeOutput.logs
  console.log(logs)

  if (codeOutput.results.length == 0) {
    console.log('No results')
    return
  }

  const firstResult = codeOutput.results[0]
  // Print description of the first rich result
  console.log(firstResult.text)

  await codeInterpreter.close()
}

run()

After running your code with the following command

$ tsx index.ts

you should see results similar to this:

stdout=['The median male height is 175.5 cm\n'] stderr=[]
<Figure size 800x400 with 1 Axes>
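The logs object keeps stdout and stderr separate, so you can also inspect them individually. For example (a sketch, building on the run() function above):

// Inside run(), after `const logs = codeOutput.logs`:
console.log(logs.stdout) // e.g. ['The median male height is 175.5 cm\n']
console.log(logs.stderr) // e.g. []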

We got our median in the logs (stdout and stderr), but we also got something interesting in firstResult.

<Figure size 800x400 with 1 Axes>

6. Save generated chart

This looks like a plot. Let's save it to a file. Update the run() function like this, and run the code again with tsx index.ts in your terminal.

index.ts

async function run() {
  const userMessage = 'Visualize a distribution of height of men based on the latest data you know. Also print the median value.'

  const codeInterpreter = await CodeInterpreter.create()

  const codeOutput = await chat(codeInterpreter, userMessage)
  if (!codeOutput) {
    console.log('No code output')
    return
  }

  const logs = codeOutput.logs
  console.log(logs)

  if (codeOutput.results.length == 0) {
    console.log('No results')
    return
  }

  const firstResult = codeOutput.results[0]
  // Print description of the first rich result
  console.log(firstResult.text)

  // If we received a chart in PNG form, we can visualize it
  if (firstResult.png) {
      // Decode the base64 encoded PNG data
      const pngData = Buffer.from(firstResult.png, 'base64');

      // Generate a unique filename for the PNG
      const filename = 'chart.png';

      // Save the decoded PNG data to a file
      fs.writeFileSync(filename, pngData);

      console.log(`Saved chart to ${filename}`);
  }

  await codeInterpreter.close()
}

run()

The chart got saved to the chart.png file and should look similar to this:

Chart visualizing the distribution of men's heights