2 changes: 2 additions & 0 deletions .gitignore
Expand Up @@ -3,3 +3,5 @@
.ipynb_checkpoints
eval_results_python.csv
.env
.env.local
/node_modules
40 changes: 40 additions & 0 deletions 16-matthew-mcconaughey/host-your-own/.gitignore
@@ -0,0 +1,40 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# next.js
/.next/
/out/

# production
/build

# misc
.DS_Store
*.pem

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# local env files
.env*.local
.env

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts

# data files from notebook
/data

131 changes: 131 additions & 0 deletions 16-matthew-mcconaughey/host-your-own/README.md
@@ -0,0 +1,131 @@
# Matthew McConaughey AI Chat

Host the agent you just built with Next.js and Vercel!

A clean, simple Next.js chat interface for interacting with a Contextual AI agent embodying Matthew McConaughey's wisdom, philosophy, and life lessons.

## Features

- Clean, modern chat interface
- Subtle space-themed design with McConaughey aesthetics
- Real-time conversation with AI agent
- Responsive design for mobile and desktop
- Powered by Contextual AI

## Prerequisites

- Node.js 18+ installed
- A Contextual AI API key
- Your Matthew McConaughey agent ID (from your notebook)

## Setup

1. **Install dependencies:**
```bash
npm install
```

2. **Set up environment variables:**

Copy the example environment file:
```bash
cp .env.local.example .env.local
```

Edit `.env.local` and add your credentials:
```
CONTEXTUAL_API_KEY=your_api_key_here
CONTEXTUAL_AGENT_ID=your_agent_id_here
```

3. **Find your Agent ID:**

You can get your agent ID from your Jupyter notebook where you created the agent. Look for the output after running the agent creation code:
```python
print(f"Agent ID created: {agent_id}")
```

Or retrieve it by listing your agents:
```python
agents = client.agents.list()
for agent in agents:
    if agent.name == "Matthew_McConaughey":
        print(f"Agent ID: {agent.id}")
```

## Running the App

Start the development server:

```bash
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) in your browser.

## Building for Production

Build the application:

```bash
npm run build
```

Start the production server:

```bash
npm start
```

## Deployment

This Next.js app can be easily deployed to:

- **Vercel** (recommended): Connect your GitHub repo at [vercel.com](https://vercel.com)
- **Netlify**: Follow their Next.js deployment guide
- **Any Node.js hosting**: Run `npm run build` and `npm start`

Don't forget to set your environment variables (`CONTEXTUAL_API_KEY` and `CONTEXTUAL_AGENT_ID`) in your deployment platform's settings.
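
For example, with the Vercel CLI (assuming it is installed and you are logged in — these commands are a sketch, not part of this repo):

```bash
vercel env add CONTEXTUAL_API_KEY
vercel env add CONTEXTUAL_AGENT_ID
```

Each command prompts for the value and for which environments (production, preview, development) it should apply to.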

## Project Structure

```
├── app/
│   ├── api/
│   │   └── chat/
│   │       └── route.ts      # API endpoint for Contextual AI
│   ├── globals.css           # Global styles and animations
│   ├── layout.tsx            # Root layout
│   └── page.tsx              # Main chat interface
├── .env.local.example        # Environment variables template
├── next.config.js            # Next.js configuration
├── tailwind.config.js        # Tailwind CSS configuration
└── package.json              # Dependencies and scripts
```

## Customization

- **Colors**: Edit `tailwind.config.js` to change the color scheme
- **Suggested queries**: Update the `suggestedQueries` array in `app/page.tsx`
- **Styling**: Modify `app/globals.css` and component styles in `app/page.tsx`
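
For instance, the `suggestedQueries` array in `app/page.tsx` might look like the sketch below — the example queries and exact type are assumptions; check the actual file for its real contents:

```typescript
// Hypothetical shape of the suggestedQueries array in app/page.tsx.
// The real array lives in that file; adjust to match its actual type.
const suggestedQueries: string[] = [
  "What's your philosophy on green lights?",
  "How do I chase who I'm becoming, not who I was?",
  "What does livin' mean to you?",
]
```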

## Troubleshooting

**"API key not configured" error:**
- Make sure you've created `.env.local` from `.env.local.example`
- Verify your API key is correct
- Restart the development server after adding environment variables

**"Agent ID not configured" error:**
- Add your agent ID to `.env.local`
- Make sure the agent exists in your Contextual AI account

**No response from agent:**
- Check your API key has the correct permissions
- Verify your agent is properly configured with a datastore
- Check the browser console and terminal for error messages

## License

MIT

183 changes: 183 additions & 0 deletions 16-matthew-mcconaughey/host-your-own/app/api/chat/route.ts
Original file line number Diff line number Diff line change
@@ -0,0 +1,183 @@
import { NextResponse } from 'next/server'
import ContextualAI from 'contextual-client'

interface Message {
  role: 'user' | 'assistant'
  content: string
}

export async function POST(request: Request) {
  try {
    const { messages } = await request.json()

    if (!messages || messages.length === 0) {
      return NextResponse.json(
        { error: 'No messages provided' },
        { status: 400 }
      )
    }

    const apiKey = process.env.CONTEXTUAL_API_KEY
    const agentId = process.env.CONTEXTUAL_AGENT_ID

    if (!apiKey) {
      return NextResponse.json(
        { error: 'API key not configured. Please set CONTEXTUAL_API_KEY in .env.local' },
        { status: 500 }
      )
    }

    if (!agentId) {
      return NextResponse.json(
        { error: 'Agent ID not configured. Please set CONTEXTUAL_AGENT_ID in .env.local' },
        { status: 500 }
      )
    }

    const encoder = new TextEncoder()

    const sseStream = new ReadableStream<Uint8Array>({
      start: async (controller) => {
        const encodeSSE = (data: string) => encoder.encode(`data: ${data}\n\n`)
        const encodeComment = (comment: string) => encoder.encode(`: ${comment}\n\n`)

        controller.enqueue(encodeComment('stream-start'))

        let heartbeat: NodeJS.Timeout | null = null
        const startHeartbeat = () => {
          heartbeat = setInterval(() => {
            try {
              controller.enqueue(encodeComment('keep-alive'))
            } catch (_) {
              // The controller may already be closed; ignore.
            }
          }, 15000)
        }

        const stopHeartbeat = () => {
          if (heartbeat) {
            clearInterval(heartbeat)
            heartbeat = null
          }
        }

        try {
          let observedMessageId: string | null = null
          const observedContentIds: string[] = []
          let observeBuffer = ''
          // Passively parse upstream SSE frames to capture the message ID and
          // retrieved content IDs without modifying the pass-through stream.
          const observeChunk = (text: string) => {
            observeBuffer += text
            if (observeBuffer.includes('\r')) observeBuffer = observeBuffer.replace(/\r\n/g, '\n').replace(/\r/g, '\n')
            while (true) {
              const sep = observeBuffer.indexOf('\n\n')
              if (sep === -1) break
              const raw = observeBuffer.slice(0, sep)
              observeBuffer = observeBuffer.slice(sep + 2)
              const lines = raw.split('\n')
              if (lines.every(l => l.startsWith(':'))) continue
              const dataPayload = lines.filter(l => l.startsWith('data:')).map(l => l.slice(5).trimStart()).join('\n')
              if (!dataPayload) continue
              try {
                const evt = JSON.parse(dataPayload)
                if (evt?.event === 'metadata') {
                  if (evt.data?.message_id) observedMessageId = evt.data.message_id
                } else if (evt?.event === 'retrievals') {
                  const contents = evt.data?.contents || []
                  for (const c of contents) {
                    const cid = c?.content_id
                    if (cid && !observedContentIds.includes(cid)) observedContentIds.push(cid)
                  }
                }
              } catch (_) {
                // Ignore frames whose payload is not valid JSON.
              }
            }
          }

          const upstream = await fetch(`https://api.contextual.ai/v1/agents/${agentId}/query?include_retrieval_content_text=true`, {
            method: 'POST',
            headers: {
              Authorization: `Bearer ${apiKey}`,
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              messages: messages.map((msg: Message) => ({ role: msg.role, content: msg.content })),
              stream: true,
            }),
          })

          if (!upstream.ok || !upstream.body) {
            const text = await upstream.text().catch(() => '')
            const errMsg = `Upstream error ${upstream.status}: ${text}`
            console.error('[ctxl-stream] upstream failed:', errMsg)
            controller.enqueue(encodeSSE(JSON.stringify({ error: errMsg })))
            controller.close()
            stopHeartbeat()
            return
          }

          startHeartbeat()

          const reader = upstream.body.getReader()
          while (true) {
            const { done, value } = await reader.read()
            if (done) break
            if (value) {
              controller.enqueue(value)
              try {
                const chunkStr = new TextDecoder().decode(value)
                observeChunk(chunkStr)
              } catch (_) {
                // Observation is best-effort; never break the pass-through stream.
              }
            }
          }

          try {
            if (observedMessageId && observedContentIds.length > 0) {
              const client = new ContextualAI({ apiKey })
              const retrievalInfo = await client.agents.query.retrievalInfo(
                agentId,
                observedMessageId,
                { content_ids: observedContentIds }
              )
              const contentMetadatas = retrievalInfo?.content_metadatas || []
              controller.enqueue(encodeSSE(JSON.stringify({ event: 'content_metadatas', data: { content_metadatas: contentMetadatas } })))
            }
          } catch (e) {
            console.error('[ctxl-stream] retrievalInfo failed:', e)
          }

          controller.enqueue(encodeComment('stream-end'))
          controller.close()
          stopHeartbeat()
        } catch (err: any) {
          console.error('[ctxl-stream] error:', err?.message || err)
          try {
            controller.enqueue(encodeSSE(JSON.stringify({ error: String(err?.message || err) })))
          } finally {
            controller.close()
            stopHeartbeat()
          }
        }
      },
      cancel: () => {
        // Client disconnected; no extra cleanup needed here.
      },
    })

    return new Response(sseStream, {
      headers: {
        'Content-Type': 'text/event-stream; charset=utf-8',
        'Cache-Control': 'no-cache, no-transform',
        Connection: 'keep-alive',
        'X-Accel-Buffering': 'no',
      },
    })
  } catch (error: any) {
    console.error('Error in chat API:', error)
    return NextResponse.json(
      { error: 'Internal server error' },
      { status: 500 }
    )
  }
}
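
The route forwards the upstream SSE bytes unchanged, so the browser receives raw `data:` frames plus `: keep-alive` comment frames. A minimal client-side sketch for splitting those frames — the helper name `splitSSEFrames` is ours, mirroring the server's `observeChunk` logic, not part of the app:

```typescript
// Hypothetical helper: splits buffered SSE text into complete `data:` payloads
// and returns any leftover partial frame to prepend to the next chunk.
function splitSSEFrames(buffer: string): { payloads: string[]; rest: string } {
  // Normalize CRLF/CR line endings to LF, as the server-side parser does.
  let rest = buffer.replace(/\r\n/g, '\n').replace(/\r/g, '\n')
  const payloads: string[] = []
  while (true) {
    const sep = rest.indexOf('\n\n')
    if (sep === -1) break
    const frame = rest.slice(0, sep)
    rest = rest.slice(sep + 2)
    const lines = frame.split('\n')
    // Comment-only frames (heartbeats like ": keep-alive") carry no data.
    if (lines.every(l => l.startsWith(':'))) continue
    const data = lines
      .filter(l => l.startsWith('data:'))
      .map(l => l.slice(5).trimStart())
      .join('\n')
    if (data) payloads.push(data)
  }
  return { payloads, rest }
}
```

A caller would accumulate decoded chunks from `response.body.getReader()`, pass the running buffer through this function, and `JSON.parse` each payload.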
