PDFs get text-extracted and summarised. Images get vision-described. Videos get transcribed with Whisper and their keyframes analysed. Everything becomes a searchable neuron — no configuration.
Drop files from any page. A global overlay catches them, AI analyses, neurons appear.
PDFs, DOCX, images (PNG/JPG/WEBP/HEIC), video (MP4/MOV), audio, spreadsheets, code files — each processed by the right pipeline.
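The per-type routing above can be sketched as a simple extension-to-pipeline lookup. This is an illustrative stand-in, not Cortex's actual internals — the pipeline names and extension sets are assumptions drawn from the list:

```python
# Sketch of routing a dropped file to a processing pipeline by extension.
# Pipeline names and extension groupings are illustrative only.
from pathlib import Path

PIPELINES = {
    "text_extraction": {".pdf", ".docx", ".txt", ".md"},
    "vision": {".png", ".jpg", ".jpeg", ".webp", ".heic"},
    "transcription": {".mp4", ".mov", ".mp3", ".wav"},
    "spreadsheet": {".xlsx", ".csv"},
}

def route(filename: str) -> str:
    """Pick the pipeline for a dropped file based on its extension."""
    ext = Path(filename).suffix.lower()
    for pipeline, extensions in PIPELINES.items():
        if ext in extensions:
            return pipeline
    return "text_extraction"  # plain-text fallback for unknown types
```

A real implementation would sniff MIME types rather than trust extensions, but the shape is the same: one dispatch table, one pipeline per file class.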
Drop a whiteboard photo or a screenshot and the AI reads the text + describes the content. Search for what's in the image, not just filenames.
Video gets audio-transcribed and keyframes vision-analysed. Meeting recordings become searchable transcripts with scene descriptions.
Drag from Finder to any Cortex page — the screen-wide overlay intercepts. No more "find the upload button" hunt.
Drag from anywhere. The global overlay lights up in violet across the screen.
The right pipeline runs — OCR, vision, Whisper, or text extraction — in a background worker.
Within seconds the file appears in your brain, fully searchable by content.
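The three-step flow above — drop, process, appear — is essentially a background worker draining a queue. A minimal stdlib sketch (queue, worker, and index names are hypothetical, and `process` stands in for the OCR/vision/Whisper pipelines):

```python
# Sketch: dropped files land on a queue; a background worker runs the
# processing step and stores the result in a searchable index.
import queue
import threading

drop_queue = queue.Queue()
index = {}  # filename -> extracted/derived text

def process(filename):
    # Stand-in for OCR, vision, Whisper, or text extraction.
    return f"searchable content of {filename}"

def worker():
    while True:
        filename = drop_queue.get()   # block until a file is dropped
        index[filename] = process(filename)
        drop_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

drop_queue.put("whiteboard.jpg")  # step 1: file dropped
drop_queue.join()                 # step 2: pipeline runs in background
print("whiteboard.jpg" in index)  # step 3: searchable → True
```

`Queue.join()` blocks until every queued item has been marked done, which is what makes "within seconds the file appears" deterministic in this toy version.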
Still curious? Write to us.
Private beta. Limited spots. Redeem your code to jump in — or join the waitlist at the bottom of the page.