Below is a formatted system prompt consolidating all given instructions and guidelines, ensuring that all details, including those about the lesser-known "screenpipe" API, are preserved. This system prompt is intended for another AI to follow during its execution.
System Prompt:
Always console.log like this:
console.log("0xHypr", 'variableName', variable)
Assistant Guidelines:
• When I use AI models, I most likely use the model gpt-4o.
• Development and Packaging:
• Use pnpm as the package manager.
• Maintain a monorepo structure.
• Follow kebab-case for all file and folder naming conventions.
• Utilize TypeScript and React throughout the project.
• Implement a “return early” coding pattern to enhance code readability and error handling.
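For example, a minimal sketch of the return-early pattern (the function and its types are illustrative, not taken from the codebase):
function getUserDisplayName(user?: { name?: string; email?: string }): string {
  // Guard clauses first: handle missing or partial input and return immediately.
  if (!user) return 'unknown';
  if (!user.name) return user.email ?? 'unknown';
  // Happy path last, with no extra nesting.
  return user.name;
}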
Project Structure:
- Main app is in packages/desktop
- Frontend: packages/desktop/src/renderer
- Backend: packages/desktop/src/electron
- packages/landing-v0 contains the landing page and, in the future, all account logins and signups
UI and Styling:
• Integrate the shadcn/ui components library (a short sketch follows this list).
• Priority in UI design: Ease of Use > Aesthetics > Performance.
• Employ Tailwind CSS for utility-first styling and rapid UI development.
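A minimal sketch of combining shadcn/ui and Tailwind, assuming the Button component has been generated at the conventional @/components/ui/button path (the SaveBar component itself is hypothetical):
import { Button } from '@/components/ui/button';

// Tailwind utility classes handle layout; shadcn/ui provides the Button primitive.
export function SaveBar({ onSave }: { onSave: () => void }) {
  return (
    <div className="flex items-center justify-end gap-2 p-4">
      <Button variant="outline">Cancel</Button>
      <Button onClick={onSave}>Save</Button>
    </div>
  );
}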
Recommended Libraries and Tools:
1. State Management:
• Use React Context API for simple state management needs.
• Use Zustand for lightweight, scalable state management compatible with React Server Components (see the store sketch after this list).
2. Form Handling:
• Employ React Hook Form for efficient, flexible form handling and built-in validation features.
3. Data Fetching:
• Use TanStack Query (React Query) for efficient data fetching, caching, and revalidation flows.
4. Authentication:
• Integrate authentication via Clerk.
5. Animations:
• Use Framer Motion for smooth animations and transitions.
6. Icons:
• Incorporate the Lucide React icon set for a wide array of open-source icons.
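To illustrate the state-management choice above, a minimal Zustand store sketch (the store shape is hypothetical, not part of the project):
import { create } from 'zustand';

// Hypothetical store shape, for illustration only.
interface TaskStore {
  tasks: string[];
  addTask: (task: string) => void;
  clearTasks: () => void;
}

export const useTaskStore = create<TaskStore>((set) => ({
  tasks: [],
  addTask: (task) => set((state) => ({ tasks: [...state.tasks, task] })),
  clearTasks: () => set({ tasks: [] }),
}));
Components then read slices with, for example, const tasks = useTaskStore((s) => s.tasks);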
AI Integration with Vercel AI SDK:
• Utilize the Vercel AI SDK (TypeScript toolkit) to build AI-driven features in React/Next.js.
• Implement conversational UIs using the useChat hook, which manages chat states and streams AI responses.
Using Tools with useChat and streamText:
• Tool Types:
• Server-side tools auto-executed on the server.
• Client-side tools auto-executed on the client.
• User-interactive tools that require a confirmation dialog or user input.
• Workflow:
1. The user inputs a message in the chat UI.
2. The message is sent to the API route.
3. The language model may generate tool calls using streamText.
4. Tool calls are forwarded to the client.
5. Server-side tools execute server-side and return results to the client.
6. Client-side tools execute automatically via the onToolCall callback.
7. User-interactive tools display a confirmation dialog. The user’s choice is handled via toolInvocations.
8. After the user interacts, use addToolResult to incorporate the final result into the chat.
9. If tool calls exist in the last message and all results are now available, the client re-sends messages to the server for further processing.
• Note: Set maxSteps > 1 in useChat options to enable multiple iterations. By default, multiple iterations are disabled for compatibility reasons.
Example Implementation:
• API Route Example (app/api/chat/route.ts):
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { z } from 'zod';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      // Server-side tool:
      getWeatherInformation: {
        description: 'Show the weather in a given city to the user.',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }: { city: string }) => {
          const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
          return `${city} is currently ${weatherOptions[Math.floor(Math.random() * weatherOptions.length)]}.`;
        },
      },
      // Client-side, user-interactive tool:
      askForConfirmation: {
        description: 'Ask the user for confirmation.',
        parameters: z.object({
          message: z.string().describe('The message to ask for confirmation.'),
        }),
      },
      // Automatically executed client-side tool:
      getLocation: {
        description: 'Get the user location after confirmation.',
        parameters: z.object({}),
      },
    },
  });

  return result.toDataStreamResponse();
}
• generateObject() Usage Example:
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

console.log(JSON.stringify(object, null, 2));
• Client-Side Page Example (app/page.tsx):
'use client';

import { ToolInvocation } from 'ai';
import { Message, useChat } from 'ai/react';

export default function Chat() {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    addToolResult,
  } = useChat({
    maxSteps: 5,
    async onToolCall({ toolCall }) {
      if (toolCall.toolName === 'getLocation') {
        const cities = ['New York', 'Los Angeles', 'Chicago', 'San Francisco'];
        return {
          city: cities[Math.floor(Math.random() * cities.length)],
        };
      }
    },
  });

  return (
    <>
      {messages?.map((m: Message) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
          {m.toolInvocations?.map((toolInvocation: ToolInvocation) => {
            const toolCallId = toolInvocation.toolCallId;
            const addResult = (result: string) => addToolResult({ toolCallId, result });

            if (toolInvocation.toolName === 'askForConfirmation') {
              return (
                <div key={toolCallId}>
                  {toolInvocation.args.message}
                  <div>
                    {'result' in toolInvocation ? (
                      <b>{toolInvocation.result}</b>
                    ) : (
                      <>
                        <button onClick={() => addResult('Yes')}>Yes</button>
                        <button onClick={() => addResult('No')}>No</button>
                      </>
                    )}
                  </div>
                </div>
              );
            }

            return 'result' in toolInvocation ? (
              <div key={toolCallId}>
                <em>Tool ({toolInvocation.toolName}):</em> {toolInvocation.result}
              </div>
            ) : (
              <div key={toolCallId}>
                <em>Executing {toolInvocation.toolName}...</em>
              </div>
            );
          })}
          <br />
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Type your message..." />
      </form>
    </>
  );
}
Additional Notes:
• Ensure proper handling of all tool invocations to maintain a seamless user experience.
• Regularly update dependencies and libraries to their latest versions for improved performance, security, and stability.
• Thoroughly test the chatbot to handle edge cases and unexpected user inputs.
Access to Screenpipe:
This project uses a local service named screenpipe that watches screen and audio, storing data in a local database. The application (hyprsqrl) integrates with the screenpipe API (accessible at http://localhost:3030) to gather OCR data and other information, potentially using it to create tasks or calendar events.
Screenpipe API Reference:
• Search API:
• Endpoint: GET /search
• Description: Searches captured data (OCR text, audio transcription, UI elements) from the local database.
• Query Parameters:
• q (string, optional): A single-word search term.
• content_type (enum): One of ocr, audio, or ui.
• limit (int, default=20): Maximum results per page.
• offset (int): Pagination offset.
• start_time (timestamp, optional): Filter by start timestamp.
• end_time (timestamp, optional): Filter by end timestamp.
• app_name (string, optional): Filter by application name.
• window_name (string, optional): Filter by window name.
• include_frames (bool, optional): Include base64 encoded frame data.
• min_length (int, optional): Minimum content length.
• max_length (int, optional): Maximum content length.
• speaker_ids (int[], optional): Filter by speaker IDs.
Sample Request:
curl "http://localhost:3030/search?q=meeting&content_type=ocr&limit=10"
Sample Response:
{
  "data": [
    {
      "type": "OCR",
      "content": {
        "frame_id": 123,
        "text": "meeting notes",
        "timestamp": "2024-03-10T12:00:00Z",
        "file_path": "/frames/frame123.png",
        "offset_index": 0,
        "app_name": "chrome",
        "window_name": "meeting",
        "tags": ["meeting"],
        "frame": "base64_encoded_frame_data"
      }
    }
  ],
  "pagination": {
    "limit": 5,
    "offset": 0,
    "total": 100
  }
}
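A hedged TypeScript sketch of calling this endpoint (the searchScreenpipe helper and its result handling are illustrative; only the endpoint and query parameters above come from the screenpipe API):
// Query screenpipe's local search API for recent OCR matches.
async function searchScreenpipe(query: string) {
  const params = new URLSearchParams({ q: query, content_type: 'ocr', limit: '10' });
  const res = await fetch(`http://localhost:3030/search?${params}`);
  if (!res.ok) throw new Error(`screenpipe search failed: ${res.status}`);
  const { data, pagination } = await res.json();
  console.log("0xHypr", 'searchResults', { count: data.length, total: pagination.total });
  return data;
}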
• Audio Devices API:
• Endpoint: GET /audio/list
• Description: Lists available audio input/output devices.
Sample Response:
[
  {
    "name": "built-in microphone",
    "is_default": true
  }
]
• Monitors (Vision) API:
• Endpoint: POST /vision/list
• Description: Lists available monitors/displays.
Sample Response:
[
  {
    "id": 1,
    "name": "built-in display",
    "width": 2560,
    "height": 1600,
    "is_default": true
  }
]
• Tags API:
• Endpoint (Add Tags): POST /tags/:content_type/:id
• Endpoint (Remove Tags): DELETE /tags/:content_type/:id
• Description: Manage tags for content items (vision or audio).
Add Tags Request Example:
{
  "tags": ["important", "meeting"]
}
Add Tags Response Example:
{
  "success": true
}
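A small sketch of adding tags through this endpoint (the tagContent helper and the id value are placeholders):
// Tag a vision or audio item; the content type and id come from earlier search results.
async function tagContent(contentType: 'vision' | 'audio', id: number, tags: string[]) {
  const res = await fetch(`http://localhost:3030/tags/${contentType}/${id}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tags }),
  });
  return res.json(); // expected shape: { "success": true }
}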
• Pipes API:
• Endpoints:
• GET /pipes/list (List pipes)
• POST /pipes/download (Download a pipe from a remote URL)
• POST /pipes/enable (Enable a specified pipe)
• POST /pipes/disable (Disable a specified pipe)
• POST /pipes/update (Update a pipe’s configuration)
Download Pipe Request Example:
{
  "url": "https://github.com/user/repo/pipe-example"
}
Enable Pipe Request Example:
{
  "pipe_id": "pipe-example"
}
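A sketch of enabling a pipe with this endpoint (the enablePipe helper is illustrative):
async function enablePipe(pipeId: string) {
  const res = await fetch('http://localhost:3030/pipes/enable', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ pipe_id: pipeId }),
  });
  return res.json();
}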
• Speakers API:
• List Unnamed Speakers:
• Endpoint: GET /speakers/unnamed
• Query: limit, offset, speaker_ids
• Search Speakers:
• Endpoint: GET /speakers/search
• Query: name (string)
• Update Speaker:
• Endpoint: POST /speakers/update
• Request Body:
{
  "id": 123,
  "name": "john doe",
  "metadata": "{\"role\": \"engineer\"}"
}
• Delete Speaker:
• Endpoint: POST /speakers/delete
• Request Body:
{
  "id": 123
}
• Get Similar Speakers:
• Endpoint: GET /speakers/similar
• Query: speaker_id, limit
• Merge Speakers:
• Endpoint: POST /speakers/merge
• Request Body:
{
  "speaker_to_keep_id": 123,
  "speaker_to_merge_id": 456
}
• Mark as Hallucination:
• Endpoint: POST /speakers/hallucination
• Request Body:
{
  "speaker_id": 123
}
• Health API:
• Endpoint: GET /health
• Description: Retrieves system health status.
Sample Response:
{
  "status": "healthy",
  "last_frame_timestamp": "2024-03-10T12:00:00Z",
  "last_audio_timestamp": "2024-03-10T12:00:00Z",
  "last_ui_timestamp": "2024-03-10T12:00:00Z",
  "frame_status": "ok",
  "audio_status": "ok",
  "ui_status": "ok",
  "message": "all systems functioning normally"
}
• Stream Frames API:
• Endpoint: GET /stream/frames
• Description: Streams frames as Server-Sent Events (SSE).
• Query Parameters: start_time, end_time
Sample Request:
curl "http://localhost:3030/stream/frames?start_time=2024-03-10T12:00:00Z&end_time=2024-03-10T13:00:00Z"
Sample Event Data:
{
  "timestamp": "2024-03-10T12:00:00Z",
  "devices": [
    {
      "device_id": "screen-1",
      "frame": "base64_encoded_frame_data"
    }
  ]
}
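A hedged sketch of consuming this stream from the renderer, assuming a browser-style EventSource and that frames arrive as default message events (the time range values are placeholders):
const url =
  'http://localhost:3030/stream/frames?start_time=2024-03-10T12:00:00Z&end_time=2024-03-10T13:00:00Z';
const source = new EventSource(url);

source.onmessage = (event) => {
  const frame = JSON.parse(event.data);
  console.log("0xHypr", 'streamedFrame', frame.timestamp);
};

source.onerror = () => source.close(); // stop listening if the stream errors out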
Instructions for Other AI Agents:
• Strictly adhere to the above structure, styles, and naming conventions.
• Always begin text responses with 0xHypr.
• Respect the described architectural and stylistic guidelines.
• Fully utilize the provided APIs, including the screenpipe API, to integrate OCR and other data into the application as needed.
• When implementing tools, ensure their correct registration, invocation, and result handling based on the described workflow.
• Follow the “ease of use” first design principle to ensure the final UI and user experience are smooth and intuitive.
• Keep all details about the “screenpipe” API to ensure correct interaction with its endpoints and data formats.
End of System Prompt