Merged
7 changes: 7 additions & 0 deletions README.md
@@ -195,6 +195,13 @@ Optional: include the dashboard service profile:
docker compose --profile ui up --build -d
```

Using Doppler-managed config (recommended for hosted dashboard/API URLs):

```bash
cd OpenMemory
tools/ops/compose_with_doppler.sh up -d --build
```

Check service status:

```bash
4 changes: 4 additions & 0 deletions dashboard/.env.local.example
@@ -0,0 +1,4 @@
# OpenMemory Dashboard Configuration
NEXT_PUBLIC_API_URL=http://localhost:8080
# Set this if your backend has OM_API_KEY configured for authentication
NEXT_PUBLIC_API_KEY=your
Comment on lines +3 to +4

Copilot AI Feb 24, 2026
Using NEXT_PUBLIC_API_KEY here to authenticate against a backend protected by OM_API_KEY will leak the server's bearer token into the browser bundle, because all NEXT_PUBLIC_* values are embedded in the client-side JavaScript. An attacker who can load the dashboard can inspect the built JS or the outgoing x-api-key header, recover the secret, and then call protected write endpoints (e.g., /memory/add, /memory/delete) directly from scripts. To fix this, never put a shared secret or OM_API_KEY-equivalent into NEXT_PUBLIC_API_KEY; keep authentication secrets server-only (e.g., OM_API_KEY plus proper user/session auth), and update this example comment to make that constraint explicit.

Suggested change
# Set this if your backend has OM_API_KEY configured for authentication
NEXT_PUBLIC_API_KEY=your
# Optional: public, non-sensitive API key for the dashboard only.
# Do NOT put OM_API_KEY or any server-only secret here; NEXT_PUBLIC_* values are exposed to the browser.
NEXT_PUBLIC_API_KEY=public-demo-key
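One sketch of the server-only alternative this comment suggests: proxy dashboard requests through a route handler so the key never reaches the browser. The route path, the `OM_API_URL` variable name, and the `x-api-key` header below are illustrative assumptions, not confirmed project conventions:

```typescript
// Hypothetical dashboard/app/api/memory/query/route.ts — a server-side proxy.
// OM_API_URL and OM_API_KEY have no NEXT_PUBLIC_ prefix, so they stay out of the client bundle.
const API_URL = process.env.OM_API_URL ?? 'http://localhost:8080'
const API_KEY = process.env.OM_API_KEY

// Attach the secret only when it is configured; exported so it can be unit-tested.
export function buildHeaders(apiKey?: string): Record<string, string> {
  return {
    'Content-Type': 'application/json',
    ...(apiKey ? { 'x-api-key': apiKey } : {}),
  }
}

// The browser calls this route; the route calls the backend with the secret attached.
export async function POST(request: Request): Promise<Response> {
  const body = await request.text()
  const res = await fetch(`${API_URL}/memory/query`, {
    method: 'POST',
    headers: buildHeaders(API_KEY),
    body,
  })
  return new Response(await res.text(), { status: res.status })
}
```

The client then fetches `/api/memory/query` with no key at all; only the server process ever holds OM_API_KEY.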

159 changes: 159 additions & 0 deletions dashboard/CHAT_SETUP.md
@@ -0,0 +1,159 @@
# Chat Interface Setup

The chat interface is now connected to the OpenMemory backend and can query memories in real time.

## Features

✅ **Memory Querying**: Searches your memory database for relevant content
✅ **Salience-based Results**: Shows top memories ranked by relevance
✅ **Memory Reinforcement**: Click the + button to boost memory importance
✅ **Real-time Updates**: Live connection to backend API
✅ **Action Buttons**: Quick actions after assistant responses

## Setup Instructions

### 1. Start the Backend

First, make sure the OpenMemory backend is running:

```bash
cd backend
npm install
npm run dev
```

The backend will start on `http://localhost:8080`.

### 2. Configure Environment (Optional)

The dashboard is pre-configured to connect to `localhost:8080`. If your backend runs on a different port, create a `.env.local` file:

```bash
# dashboard/.env.local
NEXT_PUBLIC_API_URL=http://localhost:8080
```

### 3. Start the Dashboard

```bash
cd dashboard
npm install
npm run dev
```

The dashboard will start on `http://localhost:3000`.

### 4. Add Some Memories

Before chatting, you need to add some memories to your database. You can do this via:

**Option A: API (Recommended for Testing)**

```bash
curl -X POST http://localhost:8080/memory/add \
-H "Content-Type: application/json" \
-d '{
"content": "JavaScript async/await makes asynchronous code more readable",
"tags": ["javascript", "async"],
"metadata": {"source": "learning"}
}'
```

**Option B: Use the SDK**

```javascript
// examples/js-sdk/basic-usage.js
import OpenMemory from '../../sdk-js/src/index.js';

const om = new OpenMemory('http://localhost:8080');

await om.addMemory({
content: 'React hooks revolutionized state management',
tags: ['react', 'hooks'],
});
```

**Option C: Ingest a Document**

```bash
curl -X POST http://localhost:8080/memory/ingest \
-H "Content-Type: application/json" \
-d '{
"content_type": "text",
"data": "Your document content here...",
"metadata": {"source": "document"}
}'
```

## How It Works

### Memory Query Flow

1. **User Input**: You ask a question in the chat
2. **Backend Query**: POST to `/memory/query` with your question
3. **Vector Search**: Backend searches HSG memory graph
4. **Results**: Top 5 memories returned with salience scores
5. **Response**: Chat generates answer based on retrieved memories
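The query step above can be sketched as a small client helper. The request/response field names (`query`, `top_k`, `results`, `salience`) are assumptions for illustration, not a confirmed API contract:

```typescript
// Hypothetical shape of a returned memory; adjust to the backend's actual schema.
interface MemoryResult {
  id: string
  content: string
  salience: number
  sector?: string
}

// POST the user's question to /memory/query and return the top-ranked memories.
export async function queryMemories(
  question: string,
  apiUrl = 'http://localhost:8080',
): Promise<MemoryResult[]> {
  const res = await fetch(`${apiUrl}/memory/query`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question, top_k: 5 }),
  })
  if (!res.ok) throw new Error(`query failed: ${res.status}`)
  const data = await res.json()
  // Tolerate either { results: [...] } or a bare array.
  return (data.results ?? data) as MemoryResult[]
}
```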

### Memory Reinforcement

Clicking the **+** button on a memory card:

- Sends POST to `/memory/reinforce`
- Increases memory salience by 0.1
- Makes it more likely to appear in future queries
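A minimal sketch of that reinforce call, assuming the backend accepts an `{ id, boost }` body (the field names are an assumption):

```typescript
// Send a salience boost for one memory; 0.1 matches the increment described above.
export async function reinforceMemory(
  id: string,
  boost = 0.1,
  apiUrl = 'http://localhost:8080',
): Promise<void> {
  const res = await fetch(`${apiUrl}/memory/reinforce`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id, boost }),
  })
  if (!res.ok) throw new Error(`reinforce failed: ${res.status}`)
}
```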

## Current Features

✅ Real-time memory querying
✅ Salience-based ranking
✅ Memory reinforcement (boost)
✅ Sector classification display
✅ Error handling with backend status

## Coming Soon

- 🚧 LLM Integration (OpenAI, Ollama, Gemini)
- 🚧 Conversation memory persistence
- 🚧 Export chat to memories
- 🚧 WebSocket streaming responses
- 🚧 Quiz generation from memories
- 🚧 Podcast script generation

## Troubleshooting

### "Failed to query memories"

- Ensure backend is running: `npm run dev` in `backend/`
- Check backend is on port 8080: `curl http://localhost:8080/health`
- Verify CORS is enabled (already configured)

### "No memories found"

- Add memories using the API or SDK (see setup above)
- Try broader search terms
- Check memory content exists: `GET http://localhost:8080/memory/all`

### Connection refused

- Backend not started
- Wrong port in `.env.local`
- Firewall blocking connection

## API Endpoints Used

```typescript
POST /memory/query // Search memories
POST /memory/add // Add new memory
POST /memory/reinforce // Boost memory salience
GET /memory/all // List all memories
GET /memory/:id // Get specific memory
```
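A thin client wrapper over these endpoints might look like the following sketch; the paths come from the list above, while the request/response shapes are assumptions:

```typescript
// Hypothetical minimal client for the endpoints listed above.
export class MemoryClient {
  constructor(private baseUrl = 'http://localhost:8080') {}

  // Shared fetch wrapper: prefix the base URL and fail loudly on non-2xx.
  private async request(path: string, init?: Parameters<typeof fetch>[1]) {
    const res = await fetch(`${this.baseUrl}${path}`, init)
    if (!res.ok) throw new Error(`${path} failed: ${res.status}`)
    return res.json()
  }

  add(content: string, tags: string[] = []) {
    return this.request('/memory/add', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ content, tags }),
    })
  }

  all() {
    return this.request('/memory/all')
  }

  get(id: string) {
    return this.request(`/memory/${id}`)
  }
}
```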

## Next Steps

1. Add LLM integration for intelligent responses
2. Implement conversation memory storage
3. Add streaming response support
4. Create memory export feature
5. Build quiz/podcast generators
13 changes: 12 additions & 1 deletion dashboard/Dockerfile
@@ -3,6 +3,12 @@ FROM node:20-alpine AS builder

WORKDIR /app

# Build-time public vars for Next.js client bundle
ARG NEXT_PUBLIC_API_URL=http://localhost:8080
ARG NEXT_PUBLIC_API_KEY=
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
ENV NEXT_PUBLIC_API_KEY=${NEXT_PUBLIC_API_KEY}
Comment on lines +6 to +10

Copilot AI Feb 24, 2026
These ARG/ENV declarations mark NEXT_PUBLIC_API_KEY as a build-time public variable for the Next.js client bundle, so any value you pass (including a copy of OM_API_KEY) is baked into the browser JavaScript and visible to all users. If operators follow the docs and reuse the backend's OM_API_KEY here, an attacker can simply view the dashboard, extract the x-api-key from the bundled JS or network requests, and then invoke protected write endpoints on the backend directly. To fix this, do not use a NEXT_PUBLIC_* variable for any backend authentication secret, and ensure that server-only API keys (like OM_API_KEY) are never passed into these public build args/envs used by client code.


# Install dependencies
COPY package*.json ./
RUN npm install
@@ -18,14 +24,19 @@ FROM node:20-alpine AS production

WORKDIR /app

ARG NEXT_PUBLIC_API_URL=http://localhost:8080
ARG NEXT_PUBLIC_API_KEY=
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
ENV NEXT_PUBLIC_API_KEY=${NEXT_PUBLIC_API_KEY}

# Install only production dependencies
COPY package*.json ./
RUN npm install --omit=dev

# Copy built assets from builder
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/next.config.ts ./next.config.ts
COPY --from=builder /app/next.config.js ./next.config.js

# Create a dedicated non-root user for security
RUN addgroup -g 1001 -S nodejs \
36 changes: 36 additions & 0 deletions dashboard/README.md
@@ -0,0 +1,36 @@
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
109 changes: 109 additions & 0 deletions dashboard/app/api/settings/route.ts
@@ -0,0 +1,109 @@
import { NextResponse } from 'next/server'
import fs from 'fs'
import path from 'path'

const ENV_PATH = path.resolve(process.cwd(), '../.env')

function parseEnvFile(content: string): Record<string, string> {
const result: Record<string, string> = {}
const lines = content.split('\n')

for (const line of lines) {
const trimmed = line.trim()
if (!trimmed || trimmed.startsWith('#')) continue

const equalIndex = trimmed.indexOf('=')
if (equalIndex === -1) continue

const key = trimmed.substring(0, equalIndex).trim()
const value = trimmed.substring(equalIndex + 1).trim()
result[key] = value
}

return result
}

function serializeEnvFile(updates: Record<string, string>): string {
const lines: string[] = []

for (const [key, value] of Object.entries(updates)) {
lines.push(`${key}=${value}`)
}

return lines.join('\n')
}

export async function GET() {
try {
if (!fs.existsSync(ENV_PATH)) {
return NextResponse.json({
exists: false,
settings: {}
})
}

const content = fs.readFileSync(ENV_PATH, 'utf-8')
const settings = parseEnvFile(content)

const masked = { ...settings }
if (masked.OPENAI_API_KEY) masked.OPENAI_API_KEY = '***'
if (masked.GEMINI_API_KEY) masked.GEMINI_API_KEY = '***'
if (masked.AWS_SECRET_ACCESS_KEY) masked.AWS_SECRET_ACCESS_KEY = "***"
if (masked.OM_API_KEY) masked.OM_API_KEY = '***'

return NextResponse.json({
exists: true,
settings: masked
})
} catch (e: any) {
console.error('[Settings API] read error:', e)
return NextResponse.json(
{ error: 'internal', message: e.message },
{ status: 500 }
)
}
}

export async function POST(request: Request) {
try {
const updates = await request.json()

if (!updates || typeof updates !== 'object') {
return NextResponse.json(
{ error: 'invalid_body' },
{ status: 400 }
)
}

let content = ''
let envExists = false

if (fs.existsSync(ENV_PATH)) {
content = fs.readFileSync(ENV_PATH, 'utf-8')
envExists = true
} else {
const examplePath = path.resolve(process.cwd(), '../.env.example')
if (fs.existsSync(examplePath)) {
content = fs.readFileSync(examplePath, 'utf-8')
}
}

const existing = content ? parseEnvFile(content) : {}
const merged = { ...existing, ...updates }
const newContent = serializeEnvFile(merged)

fs.writeFileSync(ENV_PATH, newContent, 'utf-8')

return NextResponse.json({
ok: true,
created: !envExists,
message: 'Settings saved. Restart the backend to apply changes.'
})
} catch (e: any) {
console.error('[Settings API] write error:', e)
return NextResponse.json(
{ error: 'internal', message: e.message },
{ status: 500 }
)
}
}