Connecting a bucket to the sandbox

To connect a bucket for storing data from the sandbox, we will use a FUSE file system to mount the bucket inside the sandbox.

You will need to create a custom sandbox template with the FUSE file system for your provider installed. The guide for building a custom sandbox template can be found here.

Google Cloud Storage

Prerequisites

To use Google Cloud Storage, you'll need a bucket and a service account. You can create a service account here and a bucket here.

If you want to write to the bucket, make sure the service account has the Storage Object User role for this bucket.

You can find a guide on creating a service account key here.
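
The key you download is a JSON file; in the mounting step below you'll write its contents (not the file path) into the sandbox. A minimal sketch of loading it locally in Node.js, assuming a hypothetical local path for the downloaded key:

import { readFileSync } from 'node:fs'

// Hypothetical path to the downloaded service account key
const serviceAccountKey = readFileSync('./service-account-key.json', 'utf8')
// Pass serviceAccountKey as the content of sandbox.files.write() in the mounting step below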

Mounting the bucket

To use Google Cloud Storage, we need to install the gcsfuse package. Here's a simple Dockerfile that can be used to build a sandbox template with gcsfuse installed.

FROM e2bdev/code-interpreter:latest

# Tools needed to add the gcsfuse apt repository
RUN apt-get update && apt-get install -y gnupg lsb-release wget

# Add the gcsfuse repository for this distribution release and trust its signing key
RUN lsb_release -c -s > /tmp/lsb_release
RUN GCSFUSE_REPO=$(cat /tmp/lsb_release) && echo "deb https://packages.cloud.google.com/apt gcsfuse-$GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN wget -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Install gcsfuse
RUN apt-get update && apt-get install -y gcsfuse

The bucket is mounted during the sandbox runtime using the gcsfuse command.

import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')
// Write the service account key into the sandbox
await sandbox.files.write('/home/user/key.json', '<your service account key>')

// Mount the bucket using gcsfuse
await sandbox.commands.run('sudo gcsfuse <flags> --key-file /home/user/key.json <bucket-name> /home/user/bucket')
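
Once the command succeeds, the bucket contents appear under /home/user/bucket. A quick sanity check follows; note that the mount is created by root, so unless you pass the access flags described below, you need sudo to read it (the file name here is just an example):

// Write a test file through the mount and list the bucket contents
await sandbox.commands.run("echo 'hello from the sandbox' | sudo tee /home/user/bucket/hello.txt")
const result = await sandbox.commands.run('sudo ls -la /home/user/bucket')
console.log(result.stdout)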

Flags

The complete list of flags is available here.

Allow the default user to access the files

To allow the default user to access the files, we can use the following flags:

-o allow_other -file-mode=777 -dir-mode=777
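
For example, the mount command from above with these flags filled in:

await sandbox.commands.run('sudo gcsfuse -o allow_other -file-mode=777 -dir-mode=777 --key-file /home/user/key.json <bucket-name> /home/user/bucket')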

Amazon S3

To use Amazon S3, we can use the s3fs package. The Dockerfile setup is similar to that of Google Cloud Storage.

FROM e2bdev/code-interpreter:latest

RUN apt-get update && apt-get install -y s3fs

As with Google Cloud Storage, the bucket is mounted during the sandbox runtime, this time using the s3fs command.

import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')

// Create a file with the AWS credentials
// If you store the credentials at a different path, point s3fs to it with the passwd_file option
await sandbox.files.write('/root/.passwd-s3fs', '<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>')
// s3fs requires the credentials file to not be readable by others
await sandbox.commands.run('sudo chmod 600 /root/.passwd-s3fs')

await sandbox.commands.run('sudo s3fs <flags> <bucket-name> /home/user/bucket')

Flags

The complete list of flags is available here.

Allow the default user to access the files

To allow the default user to access the files, add the following flag:

-o allow_other
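
For example, the mount command from above with the flag filled in:

await sandbox.commands.run('sudo s3fs -o allow_other <bucket-name> /home/user/bucket')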

Cloudflare R2

For Cloudflare R2, we can use a setup very similar to S3: the Dockerfile remains the same, but the mounting differs slightly because we need to specify the R2 endpoint.

import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create('<your template id>')
await sandbox.files.makeDir('/home/user/bucket')

// Create a file with the R2 credentials
// If you store the credentials at a different path, point s3fs to it with the passwd_file option
await sandbox.files.write('/root/.passwd-s3fs', '<R2_ACCESS_KEY_ID>:<R2_SECRET_ACCESS_KEY>')
// s3fs requires the credentials file to not be readable by others
await sandbox.commands.run('sudo chmod 600 /root/.passwd-s3fs')

await sandbox.commands.run('sudo s3fs -o url=https://<ACCOUNT_ID>.r2.cloudflarestorage.com <flags> <bucket-name> /home/user/bucket')

Flags

The flags are the same as for S3.
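
For example, the R2 mount command with allow_other filled in:

await sandbox.commands.run('sudo s3fs -o allow_other -o url=https://<ACCOUNT_ID>.r2.cloudflarestorage.com <bucket-name> /home/user/bucket')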