SYSTEM_PROMPT = """You are a highly personalized AI assistant.MEMORY MANAGEMENT:1. When users share personal information, store it immediately2. Search for relevant context before responding3. Use past conversations to inform current responsesAlways be helpful while respecting privacy."""
This prompt guides the assistant’s behavior. It tells the AI to:
Be proactive about learning user preferences
Always search memory before responding
Respect privacy boundaries
The system prompt is injected at the start of every conversation, so the AI consistently follows these rules.
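Injecting the prompt on every turn amounts to prepending a system message to the running conversation. A minimal sketch (`build_messages` is a hypothetical helper name, and the prompt is abbreviated):

```python
SYSTEM_PROMPT = "You are a highly personalized AI assistant."  # abbreviated

def build_messages(conversation: list) -> list:
    """Prepend the system prompt so every request starts with the same rules."""
    return [{"role": "system", "content": SYSTEM_PROMPT}] + conversation

msgs = build_messages([{"role": "user", "content": "Hi!"}])
# msgs[0] is always the system message, regardless of conversation length
```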
```python
import uuid

def normalize_email(email: str) -> str:
    return (email or "").strip().lower()

def stable_user_id_from_email(email: str) -> str:
    norm = normalize_email(email)
    if not norm:
        raise ValueError("Email is required")
    return uuid.uuid5(uuid.NAMESPACE_DNS, norm).hex
```
Why normalize? `"User@Mail.com"` and `" user@mail.com "` should map to the same person. We trim whitespace and lowercase to ensure consistency.

Why UUIDv5? It's deterministic: the same email always produces the same ID. This means:
User memories persist across sessions
No raw emails in logs or database tags
Privacy-preserving yet stable identity
We use `uuid.NAMESPACE_DNS` as a fixed namespace so the generated IDs are reproducible and won't collide with UUIDs derived from other namespaces.
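A quick check of the determinism claim (helpers repeated here so the snippet is self-contained):

```python
import uuid

def normalize_email(email: str) -> str:
    return (email or "").strip().lower()

def stable_user_id_from_email(email: str) -> str:
    norm = normalize_email(email)
    if not norm:
        raise ValueError("Email is required")
    return uuid.uuid5(uuid.NAMESPACE_DNS, norm).hex

# Same person, different casing/whitespace -> identical ID
assert stable_user_id_from_email("User@Mail.com") == stable_user_id_from_email(" user@mail.com ")
# Different emails -> different IDs
assert stable_user_id_from_email("a@x.com") != stable_user_id_from_email("b@x.com")
```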
```python
async def search_user_memories(query: str, container_tag: str) -> str:
    try:
        results = supermemory_client.search.memories(
            q=query,
            container_tag=container_tag,
            limit=5
        )
        if results.results:
            context = "\n".join([r.memory for r in results.results])
            return f"Relevant memories:\n{context}"
        return "No relevant memories found."
    except Exception as e:
        return f"Error searching memories: {e}"
```
This searches the user's memory store for context relevant to their current message.

Parameters:
q: The search query (usually the user’s latest message)
container_tag: Isolates memories per user (e.g., user_abc123)
limit=5: Returns top 5 most relevant memories
Why search before responding? The AI can provide personalized answers based on what it knows about the user (e.g., dietary preferences, work context, communication style).

Error handling: If memory search fails, we return a fallback message instead of crashing. The conversation continues even if memory has a hiccup.
metadata: Additional context (type of info, associated email)
Why metadata? It makes it easier to filter and organize memories later (e.g., "show me all personal_info memories").

Error handling: We log errors but don't crash. Failing to save one memory shouldn't break the entire conversation.
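The saving helper itself isn't shown in this excerpt. A minimal sketch of its shape, shown synchronously for brevity and using an in-memory stand-in client (the real Supermemory `add` call and its parameter names are assumptions):

```python
class _StubMemories:
    """In-memory stand-in for the Supermemory client, for illustration only."""
    def __init__(self):
        self.saved = []

    def add(self, **kwargs):
        self.saved.append(kwargs)

class _StubClient:
    def __init__(self):
        self.memories = _StubMemories()

supermemory_client = _StubClient()

def add_user_memory(content: str, container_tag: str, email: str = "") -> str:
    """Store a memory with metadata; log failures instead of crashing the chat."""
    try:
        supermemory_client.memories.add(  # hypothetical method/parameter names
            content=content,
            container_tag=container_tag,
            metadata={"type": "personal_info", "email": email},
        )
        return "Memory saved."
    except Exception as e:
        print(f"Error saving memory: {e}")  # in production, send to real logging
        return f"Error saving memory: {e}"

add_user_memory("User is vegetarian", "user_abc123", email="user@mail.com")
```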
Convert email → stable user ID → container tag.

The container tag (`user_abc123`) isolates this user's memories from everyone else's. Each user has their own "memory box."
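The email → ID → tag chain can be sketched as one small helper (`container_tag_for` is a hypothetical name):

```python
import uuid

def container_tag_for(email: str) -> str:
    """Email -> normalized form -> stable UUIDv5 -> per-user container tag."""
    norm = (email or "").strip().lower()
    user_id = uuid.uuid5(uuid.NAMESPACE_DNS, norm).hex
    return f"user_{user_id}"

tag = container_tag_for("User@Mail.com")
# Always the same tag for the same person: "user_" followed by 32 hex chars
```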
We take the user's latest message, search for relevant memories, then inject them into the system prompt.

Example:
```
Original: "What should I eat for breakfast?"

Enhanced system message:
"You are a helpful assistant... [system prompt]

Relevant memories:
- User is vegetarian
- User works out at 6 AM
- User prefers quick meals"
```
Now the AI can answer: “Try overnight oats with plant-based protein—perfect for post-workout!”
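The injection step itself is just string concatenation before the request is sent; a sketch with illustrative names:

```python
SYSTEM_PROMPT = "You are a helpful assistant..."  # abbreviated

def enhance_system_prompt(memory_context: str) -> str:
    """Append retrieved memories to the base prompt for this turn only."""
    if not memory_context:
        return SYSTEM_PROMPT
    return f"{SYSTEM_PROMPT}\n\n{memory_context}"

memories = "Relevant memories:\n- User is vegetarian\n- User works out at 6 AM"
messages = [{"role": "system", "content": enhance_system_prompt(memories)}]
```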
```python
if "remember this" in user_message.lower():
    await add_user_memory(user_message, container_tag, email=email)
```
After streaming completes, check whether the user explicitly asked to remember something. If yes, store it.

Why opt-in? It gives users control over what gets remembered. You could also make this automatic based on content analysis.
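The opt-in check could be generalized to a small trigger list (the phrases below are illustrative, not part of the original code):

```python
MEMORY_TRIGGERS = ("remember this", "don't forget", "keep in mind")

def should_store(message: str) -> bool:
    """True when the user explicitly asks the assistant to remember something."""
    lowered = message.lower()
    return any(trigger in lowered for trigger in MEMORY_TRIGGERS)

should_store("Please remember this: I'm vegan")  # True
should_store("What's for breakfast?")            # False
```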
```python
import streamlit as st
import requests
import json
import uuid

st.set_page_config(page_title="Personal AI Assistant", page_icon="🤖", layout="wide")

def normalize_email(email: str) -> str:
    return (email or "").strip().lower()

def stable_user_id_from_email(email: str) -> str:
    return uuid.uuid5(uuid.NAMESPACE_DNS, normalize_email(email)).hex

# Session state
if 'messages' not in st.session_state:
    st.session_state.messages = []
if 'user_name' not in st.session_state:
    st.session_state.user_name = None
if 'email' not in st.session_state:
    st.session_state.email = None
if 'user_id' not in st.session_state:
    st.session_state.user_id = None

st.title("🤖 Personal AI Assistant")
st.markdown("*Your AI that learns and remembers*")

with st.sidebar:
    st.header("👤 User Profile")
    if not st.session_state.user_name or not st.session_state.email:
        name = st.text_input("What should I call you?")
        email = st.text_input("Email", placeholder="you@example.com")
        if st.button("Get Started"):
            if name and email:
                st.session_state.user_name = name
                st.session_state.email = normalize_email(email)
                st.session_state.user_id = stable_user_id_from_email(st.session_state.email)
                st.session_state.messages.append({
                    "role": "user",
                    "content": f"Hi! My name is {name}."
                })
                st.rerun()
            else:
                st.warning("Please enter both fields.")
    else:
        st.write(f"**Name:** {st.session_state.user_name}")
        st.write(f"**Email:** {st.session_state.email}")
        if st.button("Reset Conversation"):
            st.session_state.messages = []
            st.rerun()

if st.session_state.user_name and st.session_state.email:
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if prompt := st.chat_input("Message..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):
            try:
                response = requests.post(
                    "http://localhost:8000/chat",
                    json={
                        "messages": st.session_state.messages,
                        "email": st.session_state.email
                    },
                    stream=True,
                    timeout=30
                )
                if response.status_code == 200:
                    full_response = ""
                    for line in response.iter_lines():
                        if line:
                            try:
                                data = json.loads(line.decode('utf-8').replace('data: ', ''))
                                if 'content' in data:
                                    full_response += data['content']
                            except (json.JSONDecodeError, UnicodeDecodeError):
                                continue
                    st.markdown(full_response)
                    st.session_state.messages.append({"role": "assistant", "content": full_response})
                else:
                    st.error(f"Error: {response.status_code}")
            except Exception as e:
                st.error(f"Error: {e}")
else:
    st.info("Please enter your profile in the sidebar")
```
Creates an OpenAI provider configured with your API key. The `!` tells TypeScript "this definitely exists" (because we set it in `.env.local`).

This provider object will be passed to `streamText` to specify which AI model to use.
```typescript
const SYSTEM_PROMPT = `You are a highly personalized AI assistant.

When users share personal information, remember it using the addMemory tool.
Before responding, search your memories using searchMemories to provide personalized help.

Always be helpful while respecting privacy.`
```
This guides the AI’s behavior and tells it:
When to use tools: Search memories before responding, add memories when users share info
Personality: Be helpful and personalized
Boundaries: Respect privacy
The AI SDK uses this to decide when to call searchMemories and addMemory tools automatically.
Convert the email to a container tag for memory isolation.

Simpler than Python: We skip UUID generation here for simplicity. In production, you might want to hash the email for privacy:
```typescript
// Optional: More privacy-preserving approach
import crypto from 'crypto'

const containerTag = `user_${crypto.createHash('sha256').update(email).digest('hex').slice(0, 16)}`
```
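For comparison, the equivalent privacy-preserving hashing in the Python version would look like this (a sketch, not part of the original code; `hashed_container_tag` is a hypothetical name):

```python
import hashlib

def hashed_container_tag(email: str) -> str:
    """SHA-256 of the normalized email, truncated to 16 hex chars."""
    norm = (email or "").strip().lower()
    digest = hashlib.sha256(norm.encode("utf-8")).hexdigest()
    return f"user_{digest[:16]}"

hashed_container_tag("User@Mail.com") == hashed_container_tag(" user@mail.com ")  # same tag
```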
Catches any errors (API failures, tool errors, etc.) and returns a clean error response.

Why log to console? In production, you'd send this to a monitoring service (Sentry, DataDog, etc.) to track issues.

Key Differences from Python:
| Aspect | Python | TypeScript |
| --- | --- | --- |
| Memory Search | Manual `search_user_memories()` call | AI SDK calls `searchMemories` tool automatically |
| Memory Add | Manual `add_user_memory()` call | AI SDK calls `addMemory` tool automatically |
| Tool Decision | You decide when to search/add | AI decides based on conversation context |
| Streaming | Manual SSE formatting | `toAIStreamResponse()` handles it |
| Error Handling | Try/catch in each function | AI SDK handles tool errors |
**Python = Manual Control:** You explicitly search and add memories. More control, more code.

**TypeScript = AI-Driven:** The AI decides when to use tools. Less code, more "magic."
Try these conversations to test memory:

Personal Preferences:
```
User: "I'm Sarah, a product manager. I prefer brief responses."

[Later]
User: "What's a good way to prioritize features?"
Assistant: [Should reference PM role and brevity preference]
```
Dietary & Lifestyle:
```
User: "Remember I'm vegan and work out at 6 AM."

[Later]
User: "Suggest a quick breakfast."
Assistant: [Should suggest vegan options for pre/post workout]
```
Work Context:
```
User: "I'm working on a React project with TypeScript."

[Later]
User: "Help me with state management."
Assistant: [Should suggest TypeScript-specific solutions]
```