Changelog
Desktop Sandbox for computer use
We are launching the E2B Desktop Sandbox. It's a secure virtual desktop powered by E2B and made specifically for LLM-powered computer use. Each E2B Sandbox is isolated from the others and can be customized with any dependencies you want.
We also added a computer-use example built on the E2B Desktop Sandbox.
SDK Reference
We published the SDK reference for the E2B CLI & Core SDK, Code Interpreter SDK, and Desktop Sandbox SDK. To learn more, see the corresponding page in the documentation.
Support for interactive charts
We added support for interactive charts. E2B now automatically detects charts when executing Python code with runCode() in JavaScript or run_code() in Python. The Python code must include Matplotlib charts.
E2B sends the chart data back to the client, so you can use your favorite charting library to create custom interactive charts.
See the corresponding docs page, or try interactive charts in the E2B Analyst app.
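Since the chart comes back as structured data, it can be mapped into whatever format your client-side charting library expects. A minimal sketch of that mapping: the field names on the sample payload below (type, x_label, elements, points) are assumptions for illustration, and the commented-out SDK call is hypothetical usage based on this changelog; check the SDK reference for the real schema.

```python
# A chart payload in roughly the shape E2B returns for a detected line chart.
# Field names here are assumptions for this sketch, not the verified schema.
SAMPLE_CHART = {
    "type": "line",
    "title": "Temperature over time",
    "x_label": "day",
    "y_label": "temp (C)",
    "elements": [
        {"label": "city A", "points": [[1, 20], [2, 22], [3, 21]]},
    ],
}

def to_series(chart):
    """Convert chart data into generic {name, x, y} series that most
    client-side charting libraries (Plotly, ECharts, etc.) can consume."""
    series = []
    for element in chart["elements"]:
        xs, ys = zip(*element["points"])  # split [x, y] pairs into two axes
        series.append({"name": element["label"], "x": list(xs), "y": list(ys)})
    return series

# With an E2B_API_KEY set, the chart would come from a real execution,
# along these lines (unverified sketch):
#
#   from e2b_code_interpreter import Sandbox
#   execution = Sandbox().run_code(matplotlib_code)
#   chart = execution.results[0].chart

print(to_series(SAMPLE_CHART))
```

The conversion is deliberately library-agnostic: once the data is in plain series form, rendering it interactively is the client's choice.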
Week #61
- Add support for Gemini Pro 1.5 002, Gemini Flash 002, and Mistral Small models
- Add a proper sandbox duration limit to the API
Core & Code Interpreter SDK
- Fix RPC issue causing "Server disconnected without sending a response"
Week #60
This week we improved the E2B Artifacts and built examples that test the capabilities of OpenAI's o1 model.
- Lift o1 rate-limits
- Allow bringing your own API keys
- Fix an issue with E2B API keys not being set correctly for returning users
- Make rate limits adjustable via environment variables
We built an example that lets the o1 model write the code to train a machine learning model on a dataset from Kaggle.
We used the E2B Code Interpreter SDK to run the LLM-generated code in a secure, isolated cloud environment. For the LLMs, we used o1-mini to generate a thorough plan for the task with many code blocks, and GPT-4o-mini to extract the final code from that plan.
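The second step of that pipeline, pulling the final code out of o1-mini's markdown plan, is handled by GPT-4o-mini in the example. As a local, deterministic stand-in for that model call, a regex over fenced code blocks captures the same idea (the fence string is built indirectly so this sketch itself remains a valid code block):

```python
import re

FENCE = "`" * 3  # triple backtick, built indirectly for readability

def extract_code_blocks(plan, lang="python"):
    """Return the contents of every fenced `lang` code block in a
    markdown plan, in document order."""
    pattern = rf"{FENCE}{lang}\n(.*?){FENCE}"
    return [m.strip() for m in re.findall(pattern, plan, flags=re.DOTALL)]

# A miniature plan in the shape o1-mini might produce.
plan = (
    "First, load the dataset:\n"
    f"{FENCE}python\n"
    "import pandas as pd\n"
    "df = pd.read_csv('train.csv')\n"
    f"{FENCE}\n"
    "Then train the model:\n"
    f"{FENCE}python\n"
    "model.fit(df)\n"
    f"{FENCE}\n"
)

print(extract_code_blocks(plan))
```

Using a model for this step, as the example does, is more robust when the plan interleaves prose and partial snippets; the regex version is the cheap baseline.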
Week #59
Core & Code Interpreter SDK
- Indicate data from multiple plots in the response
- For histograms, indicate which axis the bar data should be displayed on
- Return tick labels in a better format
- Add graph types and improve the output data structure
- Propagate kernel restarts to the user
- Add units in other contexts (e.g. tooltips)
- Fix terminal pty missing output
- Add terminal (pty) support to Python SDK
- Update artifacts support for o1-preview and o1-mini
- Fix password reset
- Fix deleting templates without a successful build
Week #58
Core & Code Interpreter SDK
- Ensure sequential execution when executing code in Code Interpreter
- Separate Python environment for the proxy webserver for Code Interpreter
- Extract data from the notebook into the beta SDK so users can easily access data from DataFrames
- Fix an issue causing a new kernel to not respect the provided working directory
- You can now iterate on the generated artifact by chatting with the LLM
- You can upload images to models that support multi-modal input
- Added a button to start a new chat
- You can now download generated artifacts
- Improve startup speed of the Next.js template
Week #57
- Improved UX
- Add your own personas and LLMs/providers
- Out of the box support for shadcn components
- Fixed a critical memory issue causing sporadic 504 and 503 errors for users
- Moved UFFD handling to orchestrator
- Fixed a bug with starting a process with a PTY more than once
- Increased the limit on the number of open files and connections
Week #56
- Added support for Vue.js, Gradio
- Added tool and output streaming
- General UI & UX improvements
- Improved the general reliability of the generated apps
- Fixed a memory bug causing 504 and 502 errors when interacting with the sandbox