SimpleMind: The Easy Path to AI Integration in Python
A Comprehensive Guide to AI Integration on a Raspberry Pi 400 with Anthropic’s Claude as a Lightweight Model and the Thonny IDE
This article was created using AI.
1. Introduction
SimpleMind is a Python library that simplifies AI integration by offering a unified interface to a variety of AI models. This guide is a comprehensive resource for implementing AI solutions on a Raspberry Pi 400: it walks through the installation and use of SimpleMind with Anthropic’s Claude and the Thonny IDE, and addresses the specific challenges and optimizations of this platform.
2. Supported APIs and Models
SimpleMind supports a variety of AI providers and models, listed in the table below. The API is identical across all of them: to select a specific provider or model, pass the llm_provider and llm_model parameters when calling generate_text, generate_data, or create_conversation.
If you want to see SimpleMind support additional providers or models, please send a pull request!
LLM | llm_provider | Default llm_model
---|---|---
Anthropic’s Claude | "anthropic" | "claude-3-5-sonnet-20241022"
Amazon’s Bedrock | "amazon" | "anthropic.claude-3-5-sonnet-20241022-v2:0"
Deepseek | "deepseek" | "deepseek-chat"
Google’s Gemini | "gemini" | "models/gemini-1.5-pro"
Groq | "groq" | "llama3-8b-8192"
Ollama | "ollama" | "llama3.2"
OpenAI’s GPT | "openai" | "gpt-4o-mini"
xAI’s Grok | "xai" | "grok-beta"
Anthropic’s Claude is a good fit for the Raspberry Pi 400: inference runs in Anthropic’s cloud, so it places almost no load on the Pi itself. Ollama, by contrast, runs models locally on the Pi’s CPU and is therefore not suitable.
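Switching providers is then just a matter of changing these two parameters. Here is a minimal sketch (it assumes the matching API keys from Chapter 7 are already set as environment variables):
import simplemind as sm
# The same call against two different providers from the table above.
claude_reply = sm.generate_text(
    prompt="Say hello in one sentence.",
    llm_provider="anthropic",
    llm_model="claude-3-5-sonnet-20241022",
)
groq_reply = sm.generate_text(
    prompt="Say hello in one sentence.",
    llm_provider="groq",
    llm_model="llama3-8b-8192",
)
print(claude_reply)
print(groq_reply)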
3. Adjustments for Using SimpleMind on the Raspberry Pi 400
Installing and using SimpleMind on a Raspberry Pi 400 is possible but requires some adjustments:
- Use a virtual environment to isolate packages.
- Be aware of the Raspberry Pi’s performance limitations, especially in CPU-only mode.
- Ollama can be installed on the Raspberry Pi but runs in CPU mode, which slows down processing.
- An external GPU cannot be directly used with the Raspberry Pi 400.
For optimal performance:
- Update your operating system to the latest version.
- Monitor system resources while running SimpleMind.
- Consider using alternative, lightweight AI tools specifically optimized for the Raspberry Pi.
4. Performance Considerations
The Raspberry Pi 400 has limitations:
- No external GPU support
- Limited RAM (4GB or 8GB)
- CPU-only processing for AI models
To optimize performance:
- Use smaller, optimized models
- Increase swap space
- Consider overclocking (with caution)
- Monitor system resources during operation (see the monitoring sketch below)
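Here is a minimal monitoring sketch. Note the assumption: it relies on the third-party psutil package, which is not part of SimpleMind and must be installed separately (pip install psutil):
import time
import psutil  # assumption: third-party package, install with: pip install psutil
# Print CPU and memory usage every five seconds; stop with Ctrl+C.
while True:
    cpu = psutil.cpu_percent(interval=None)  # percent since the last call
    mem = psutil.virtual_memory()
    print(f"CPU: {cpu:5.1f}%  RAM: {mem.percent:5.1f}% "
          f"({mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB)")
    time.sleep(5)
Run it in a second terminal while your SimpleMind script is working to see how close you are to the Pi’s memory limit.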
5. Setting Up Your Raspberry Pi
Before installing SimpleMind, ensure your Raspberry Pi 400 is properly set up:
- Update your Raspberry Pi OS:
sudo apt update && sudo apt upgrade -y
- Install required dependencies:
sudo apt install python3-pip python3-venv -y
6. Installing SimpleMind
To avoid conflicts with system packages, we’ll use a virtual environment:
- Create the virtual environment:
python3 -m venv ~/simplemind_env
- Activate the virtual environment:
source ~/simplemind_env/bin/activate
After activation, you should see "(simplemind_env)" at the beginning of your command line.
- Check that the virtual environment’s Python is now active:
which python
python --version
python -c "import sys; print(sys.executable)"
- Install SimpleMind:
pip install 'simplemind[full]'
You can then check whether SimpleMind has been installed correctly and that its requests dependency is available:
pip list | grep simplemind
python -c "import requests; print(requests.__version__)"
Note: If you encounter an "externally-managed-environment" error, ensure you’re in the activated virtual environment.
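As a final sanity check, you can confirm that the package imports cleanly from within the environment’s Python:
python -c "import simplemind; print('SimpleMind imported successfully')"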
7. Configuring API Keys
Set up your API keys as environment variables:
export ANTHROPIC_API_KEY="your-api-key-here"
This pattern allows you to keep your API keys private and out of your codebase.
Note that Anthropic’s Claude is a hosted cloud service: there is no local service or port to check on the Raspberry Pi. SimpleMind simply sends HTTPS requests to Anthropic’s API using the key above.
Replace ANTHROPIC with the environment-variable prefix of your chosen provider (e.g., XAI, DEEPSEEK, GROQ, etc.) from the table in Chapter 2:
export GROQ_API_KEY="your-api-key-here"
Make sure there are no spaces around the equals sign. To check whether the variable has been set correctly, run:
echo $GROQ_API_KEY
8. Alternatively: Make your API Key persistent across sessions
Returning to the Anthropic example: add this line to your ~/.bashrc or ~/.bash_profile file to make the key persistent across sessions:
echo 'export ANTHROPIC_API_KEY="your-api-key-here"' >> ~/.bashrc
Then, reload your bash configuration:
source ~/.bashrc
9. Alternatively: Set the API Key directly in your Python script
You can also set the API key directly in your Python script, though this is less secure because the key then lives in your code:
import os
os.environ['ANTHROPIC_API_KEY'] = 'your-api-key-here'
Verify that the API key is set correctly by printing it in your Python script:
import os
print(os.environ.get('ANTHROPIC_API_KEY'))
After setting the API key correctly, try running your script again. If you continue to have issues, double-check that your API key is valid and that you have sufficient credits in your Anthropic account.
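To fail early with a clear error message instead of an opaque authentication failure, you can guard your script like this (a minimal sketch):
import os
api_key = os.environ.get('ANTHROPIC_API_KEY')
if not api_key:
    raise RuntimeError(
        "ANTHROPIC_API_KEY is not set; export it in your shell "
        "or add it to ~/.bashrc as described in Chapter 8."
    )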
10. Verify your environment in Thonny
Check in Thonny’s Shell whether the correct environment is being used:
import sys
print(sys.executable)
import requests
print(requests.__version__)
This should display the path to the Python installation in your virtual environment and output the version of requests without throwing errors.
If you instead see /usr/bin/python3, that is the path to the system-wide Python installation, not to your virtual environment.
If you have difficulties finding the exact path to your virtual environment, you can execute the following in your terminal:
which python
Note this path.
Similarly, if requests reports a version (e.g., 2.28.1) that matches the system-wide installation rather than your virtual environment, Thonny is not using the desired configuration. To solve the problem, try the following steps:
- Close Thonny completely.
- Open your terminal and activate your virtual environment as follows:
source /path/to/your/venv/bin/activate
Start Thonny from this terminal:
thonny &
The ‘&’ at the end starts Thonny in the background and gives you back control of the terminal. If this does not work, you can also try setting the interpreter manually:
In the Thonny settings, select ‘A specific Python interpreter or virtual environment’.
Enter the full path to the Python executable in your virtual environment:
/home/user/path/to/venv/bin/python
Click on ‘OK’ and restart Thonny.
After these changes, Thonny should use the Python installation from your virtual environment. Check this by executing the following in the Thonny shell:
import sys
print(sys.executable)
import requests
print(requests.__version__)
This approach should force Thonny to use the Python interpreter from your virtual environment. If it works, you can always start Thonny this way in the future when you want to work with your virtual environment.
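If you prefer a programmatic check, the standard library can tell you whether the running interpreter belongs to a virtual environment (a minimal sketch):
import sys
# In a virtual environment, sys.prefix points at the venv,
# while sys.base_prefix points at the system installation.
print(f"Interpreter: {sys.executable}")
print(f"Inside a virtual environment: {sys.prefix != sys.base_prefix}")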
11. Basic SimpleMind Functions
Here are some fundamental SimpleMind operations:
Text Generation
import simplemind as sm

response = sm.generate_text(
    prompt="What is the main purpose of SimpleMind?",
    llm_provider="anthropic",
    llm_model="claude-3-5-sonnet-20241022"
)
print(response)
Conversational AI
SimpleMind also allows for easy conversational flows:
import simplemind as sm
# Create a conversation
conv = sm.create_conversation()
# Add a message to the conversation
conv.add_message("user", "Hi there, how are you?")
# Send the message and get a response
response = conv.send()
# Print the response
print(response)
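Because the conversation object keeps its message history, follow-up questions can build on earlier turns. A minimal multi-turn sketch (it assumes your Anthropic key is set as described above):
import simplemind as sm

conv = sm.create_conversation(llm_provider="anthropic")
conv.add_message("user", "Name one advantage of the Raspberry Pi 400.")
print(conv.send())

# The conversation retains context, so this refers to the previous answer.
conv.add_message("user", "Can you elaborate on that?")
print(conv.send())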
12. Advanced Features
Structured Data with the Pydantic model
You can use Pydantic models to structure the response from the LLM, if the LLM supports it:
import simplemind as sm
from pydantic import BaseModel

class Poem(BaseModel):
    title: str
    content: str

poem = sm.generate_data("Write a poem about AI", response_model=Poem)
print(f"Title: {poem.title}\n\n{poem.content}")
Tools and Function Calls
Tools (also known as functions) let you call any Python function from your AI conversations. Here’s an example:
import simplemind as sm

@sm.tool(llm_provider="anthropic")
def get_weather(location: str, unit: str = "celsius"):
    """Returns the current weather for a location."""
    # A real implementation would query a weather API here.
    return f"20 {unit}"

conv = sm.create_conversation()
conv.add_message("user", "What's the weather in Berlin?")
response = conv.send(tools=[get_weather])
print(response)
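When the model decides the question calls for the tool, SimpleMind runs the function and feeds its return value back into the conversation, so the final answer incorporates the tool’s output.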
13. Troubleshooting Common Issues
- API Key Errors: Ensure your API keys are correctly set and have sufficient credit.
- Memory Issues: If you encounter out-of-memory errors, try using smaller models or increasing swap space.
- Performance Problems: On Raspberry Pi, some operations may be slower. Consider using lighter models or optimizing your code.
14. Exit the Virtual Environment
When you’re done, exit the virtual environment and return to your system shell:
deactivate
15. Conclusion
While running SimpleMind on a Raspberry Pi 400 presents challenges, it’s a cost-effective way to explore AI integration. By following this guide, you can set up a functional AI development environment on your Raspberry Pi, opening doors to exciting projects and experiments.
16. Sources
Special thanks to Kenneth Reitz for the inspiration for this project with the Raspberry Pi! You will find Kenneth Reitz’s original documentation under "1. SimpleMind Documentation".
Accessed on: February 12, 2025