Connecting a bucket to the sandbox

To persist data beyond the lifetime of a sandbox, you can store it in a bucket and mount the bucket inside the sandbox using a FUSE file system.

You will need to build a custom sandbox template with the FUSE file system installed. A guide on how to build a custom sandbox template can be found here.
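For reference, once the Dockerfile for your template is ready (see the examples below), building the template with the e2b CLI typically looks like this. This is only a sketch; the exact command and flags depend on your CLI version, so check the guide:

e2b template build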

Google Cloud Storage

Prerequisites

To use Google Cloud Storage you'll need a bucket and a service account. The service account can be created here, the bucket here.

If you want to write to the bucket, the service account must have the Storage Object Admin role for it.
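For example, the role can be granted with the gsutil CLI (a sketch; the service account address and bucket name are placeholders):

gsutil iam ch serviceAccount:<SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com:roles/storage.objectAdmin gs://<bucket-name>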

A guide on how to create a service account key can be found here.
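For example, with the gcloud CLI a JSON key can be created like this (a sketch; the service account address is a placeholder). The key is saved to key.json, which the mounting snippet below uploads to the sandbox:

gcloud iam service-accounts keys create key.json --iam-account=<SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com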

Mounting the bucket

To use Google Cloud Storage we need to install the gcsfuse package. Here's a simple Dockerfile that creates an image with gcsfuse installed.

FROM ubuntu:latest

RUN apt-get update && apt-get install -y gnupg lsb-release wget

# Add the gcsfuse apt repository for this Ubuntu release
RUN GCSFUSE_REPO=$(lsb_release -c -s) \
    && echo "deb https://packages.cloud.google.com/apt gcsfuse-$GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN wget -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

RUN apt-get update && apt-get install -y gcsfuse

The actual mounting of the bucket happens at runtime, when the sandbox starts. The gcsfuse command is used to mount the bucket inside the sandbox.

import fs from 'node:fs'
import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create({ template: '<your template id>' })
await sandbox.filesystem.makeDir('/home/user/bucket')

// Upload the service account key to the sandbox
await sandbox.filesystem.write('/home/user/key.json', fs.readFileSync('key.json', 'utf8'))

const process = await sandbox.process.start('sudo gcsfuse <flags> --key-file /home/user/key.json <bucket-name> /home/user/bucket')
const output = await process.wait()
if (output.exitCode) {
    throw Error(output.stderr)
}
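Once the mount succeeds, anything written under /home/user/bucket is persisted in the bucket. A quick sanity check, continuing the snippet above (the file name is just an example, and writing as the default user assumes the access flags described below):

// Write a file through the mount; it should show up as an object in the bucket
await sandbox.filesystem.write('/home/user/bucket/hello.txt', 'Hello from the sandbox')

// List the mount to confirm the file is visible
const ls = await sandbox.process.startAndWait('ls -la /home/user/bucket')
console.log(ls.stdout)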

Flags

The list of all flags can be found here.

Allow the default user to access the files

To allow the default user to access the files, we can use the following flags:

-o allow_other -file-mode=777 -dir-mode=777
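Putting it together, a gcsfuse invocation with these flags would look like this:

sudo gcsfuse -o allow_other -file-mode=777 -dir-mode=777 --key-file /home/user/key.json <bucket-name> /home/user/bucket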

Amazon S3

For Amazon S3 we can use the s3fs package. The Dockerfile is similar to the one for Google Cloud Storage.

FROM ubuntu:latest

RUN apt-get update && apt-get install -y s3fs

Similarly to Google Cloud Storage, the actual mounting of the bucket happens at runtime, when the sandbox starts. The s3fs command is used to mount the bucket inside the sandbox.

import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create({ template: '<your template id>' })
await sandbox.filesystem.makeDir('/home/user/bucket')

// Create a file with the credentials
// If you store the credentials at a different path, pass it to s3fs via the -o passwd_file option
await sandbox.filesystem.write('/root/.passwd-s3fs', '<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>')
await sandbox.process.startAndWait('sudo chmod 600 /root/.passwd-s3fs')

const process = await sandbox.process.start('sudo s3fs <flags> <bucket-name> /home/user/bucket')
const output = await process.wait()
if (output.exitCode) {
    throw Error(output.stderr)
}

Flags

The list of all flags can be found here.

Allow the default user to access the files

To allow the default user to access the files, add the following flag:

-o allow_other
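With this flag, the full mount command from the snippet above becomes:

sudo s3fs -o allow_other <bucket-name> /home/user/bucket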

Cloudflare R2

For Cloudflare R2 we can use a very similar setup to S3. The Dockerfile is the same as for S3. The mounting is slightly different: we need to specify the R2 endpoint.

import { Sandbox } from 'e2b'

const sandbox = await Sandbox.create({ template: '<your template id>' })
await sandbox.filesystem.makeDir('/home/user/bucket')

// Create a file with the R2 credentials
// If you store the credentials at a different path, pass it to s3fs via the -o passwd_file option
await sandbox.filesystem.write('/root/.passwd-s3fs', '<R2_ACCESS_KEY_ID>:<R2_SECRET_ACCESS_KEY>')
await sandbox.process.startAndWait('sudo chmod 600 /root/.passwd-s3fs')

const output = await sandbox.process.startAndWait('sudo s3fs -o url=https://<ACCOUNT ID>.r2.cloudflarestorage.com <flags> <bucket-name> /home/user/bucket')
if (output.exitCode) {
    throw Error(output.stderr)
}

Flags

The available flags are the same as for S3.
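Putting it together, an R2 mount readable by the default user would look like this:

sudo s3fs -o url=https://<ACCOUNT ID>.r2.cloudflarestorage.com -o allow_other <bucket-name> /home/user/bucket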