Desktop App: Recording & Analyzing Meetings
Record any meeting — Zoom, Google Meet, Teams, or in-person — and get evidence, signals, and a full transcript automatically, right as the conversation happens.
The UpSight desktop app runs on macOS and Windows. It sits in your menu bar / system tray and automatically detects when you join a meeting.
Go to getupsight.com/download and install the app for your OS.
Click the UpSight icon in your menu bar / tray and choose Sign In. Your browser will open to complete authentication — same account as the web app.
On first launch, macOS or Windows will ask for microphone permission. Allow it — the app needs mic access to record your side of the conversation.
When the app detects a Zoom, Google Meet, or Teams call, the floating panel slides in. You can also open it manually from the tray icon.
macOS vs. Windows
On macOS, the app captures system audio directly — all participants are recorded automatically with no extra setup. On Windows, use the cloud bot instead: it joins your meeting as a participant and records every speaker cleanly. See the Cloud Bot section below.
The floating panel overlays your screen during a meeting. It has five tabs — each giving you a different live view into the conversation.
Guidance: the research questions you set up for this project, so you never lose track of what you're trying to learn. Questions check off as they're covered. Configure them in the web app under Project Settings.
Notes: type a quick note any time during the call — it's saved and linked to this meeting. Use the input at the bottom of the panel. Toggle between Note and Task mode with the icon button or ⌘⇧T.
Tasks: AI-suggested follow-up actions appear here as the conversation progresses. Review and accept them individually or hit Accept All. Accepted tasks sync to the web app.
Signals: real-time evidence cards — goals, pain points, context, and workflow moments extracted from the live transcript as they happen. Each card is saved to your project automatically.
Transcript: a speaker-attributed live transcript, word by word. Useful for double-checking a quote or catching something you missed. The full transcript is also available in the web app after the meeting ends.
Panel modes
The panel opens automatically when a meeting is detected. If it doesn't appear, click the UpSight icon in your menu bar / tray.
Press the red circle button at the top of the panel. It pulses amber while connecting, then turns solid red with a running timer once recording is confirmed. The Signals and Transcript tabs will start populating within seconds.
The panel runs in the background. Glance at the Signals tab to see evidence cards appearing in real time. Use the input bar at the bottom to jot notes or create tasks without breaking your flow.
Click the Record button again to stop. The app finalizes the transcript, uploads the audio, and triggers full AI processing. You'll see the completed interview in the web app within a few minutes.
Tip: minimize the panel to stay focused
Once recording is running you can minimize the panel — it shrinks to a small indicator that pulses red so you know it's active. Click it to bring the panel back.
Windows users should use the cloud bot instead of local mic recording. It joins your Zoom or Google Meet call as a named participant ("UpSight") and records every speaker cleanly. macOS users don't need this — system audio capture handles all participants automatically.
The cloud bot toggle appears in the panel header once a compatible meeting is detected. Click it to invite the bot into your call.
You'll see a join request from "UpSight" — admit it like any other participant. Once admitted, the bot records all audio and the panel shows its active recording state in red.
When the meeting ends, the bot exits and the recording is finalized just like a local recording.
Heads up: participants see the bot
The cloud bot joins as a visible participant. It's good practice to let attendees know the meeting is being recorded — most video platforms require this by default anyway.
Once you stop recording, UpSight processes the conversation in the background. You don't need to do anything — open the web app a few minutes later and it's all there.
What gets created automatically: a full speaker-attributed transcript, evidence cards (goals, pain points, context, and workflow moments), and any notes and accepted tasks from the call.
Where to find it in the web app: the conversation appears as a completed interview in your project, with its transcript, evidence, notes, and tasks attached.
Typical processing time: 3–8 minutes
A 60-minute conversation usually finishes processing in 5–8 minutes. Shorter calls are faster. If you don't see the conversation in the web app after 10 minutes, refresh the page.
The Guidance tab shows your project's research questions during the call. If you haven't set them up yet, do so in the web app under Project Settings before your first meeting.
Better audio = better transcription = better evidence extraction. A built-in laptop mic works, but a headset noticeably improves accuracy.
If you say "Sarah, can you tell us more about..." at the start, the AI can better attribute quotes to the right people. Speaker names you say explicitly are captured and used in the evidence cards.
Use the panel's note input during the call — it takes seconds and ensures you don't lose the thought. Notes become searchable evidence attached to this conversation.
On Windows, the cloud bot is the recommended way to record remote meetings — it captures every speaker cleanly. macOS captures system audio directly, so no bot is needed.