What this is: the shared context for the week. Read this before diving into your team's dossier. Everyone should leave this doc able to explain, in a sentence or two, what each of the other teams is building and why.
The week
We're spending a week in NYC, split into three teams, each hacking on one concept. By Friday, each team will have a demo — something real, running, and usable — that illustrates the idea.
The three projects aren't random: they're complementary facets of the same bet about where Mem is going — an ambient, conversational, always-aware AI thought partner that works alongside you through your whole day, not just when you open the app.
Each product gives the user a different kind of access to that partner. Together, they sketch the product we're becoming.
The three projects, at a glance
1. Floating Mem
The pitch: an always-visible tomato-colored Mem bubble in the bottom-right of the desktop. Persistent, unobtrusive, one gesture away. Mac, built on our existing Electron app and integrating through Mem Agent.
Core workflows:
- Push-to-talk capture. Hold a global hotkey from anywhere, speak, release. Voice note flows through Mem Agent and lands in Mem. You never leave what you were doing.
- Expand to chat. Click the bubble to open an ongoing chat thread with Mem, anchored to the corner. Think: Facebook's old floating-heads chat, but for your AI thought partner.
- Orchestrate the main app. "Open my Europe trip plan" → the main Mem app comes forward with that content loaded. Capture and recall use the same gesture.
- Auto-expand. When a response is user-facing information (a looked-up number, a pulled-up fact), the chat panel opens on its own so the answer is visible. LLM-judged via a tool call.
- Peek. When Mem wants to surface something — a reminder, a heads-up, a proactive nudge — it peeks from the bubble. Mem Agent gets a tool to send peeks the same way it sends Slack messages today. (A rough sketch of the hotkey and peek wiring follows this list.)
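Most of the push-to-talk and peek behavior is wiring in the Electron main process. A minimal sketch, assuming our existing Electron app; the hotkey choice, the IPC channel names, the `bubble.html` renderer (which would own the microphone), and the `showPeek` helper are illustrative assumptions, not decisions:

```ts
import { app, globalShortcut, BrowserWindow } from "electron";

let recording = false;

// Electron's globalShortcut fires on key press and exposes no release event,
// so this sketch toggles capture; true hold-to-talk would need a native key hook.
function toggleCapture(bubble: BrowserWindow) {
  recording = !recording;
  // The renderer owns the microphone; the main process only says start/stop.
  // Stopping hands the audio off to Mem Agent, which files the note in Mem.
  bubble.webContents.send(recording ? "capture:start" : "capture:stop");
}

// Hypothetical entry point for peeks: when Mem Agent calls its peek tool
// (the same channel style it uses for Slack messages today), the bubble
// renders a small peek card.
export function showPeek(bubble: BrowserWindow, text: string) {
  bubble.webContents.send("peek:show", { text });
}

app.whenReady().then(() => {
  const bubble = new BrowserWindow({
    width: 72,
    height: 72,
    frame: false,
    alwaysOnTop: true,
    transparent: true,
  });
  bubble.loadFile("bubble.html"); // hypothetical bubble UI

  // A global shortcut works from any app, which is the whole point of the bubble.
  globalShortcut.register("CommandOrControl+Shift+M", () => toggleCapture(bubble));
});

app.on("will-quit", () => globalShortcut.unregisterAll());
```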
What the demo looks like: the minimum is push-to-talk from any app that lands content in Mem, plus a clickable chat thread. Richer versions layer in auto-expand, app orchestration, a scoped Heads Up trigger (screen-aware recall for a specific context), and peeks as Mem Agent's proactive channel. The most ambitious version adds screenshot capture as an extra path.
Why it matters: today Mem is a place you go. Floating Mem makes Mem a place you are in — always-on, low-friction, ambient. It's the most visible product surface we have in this set and the clearest demo of the "Mem is always with you" pitch. It's also plausibly the form factor Heads Up evolves into.
2. Huddle Mode
The pitch: a real-time voice conversation with Mem — the up-level of Voice Mode from "dictate inside a single note" to "talk to your whole Mem." Like calling a human assistant who already knows your world.
Core workflows:
- Brain-dump capture without blocking. You talk, Mem reacts to the idea conversationally ("Oh, that's a good one — does that play into the pilot Sarah was pushing for?"), and silently queues the dump as work. The structured note lands in Mem after the call.
- Recall in the moment. "What's my KTN number?" "What were my follow-ups with Sarah?" "What should I focus on today?" Mem answers fluently using the same search-and-retrieval toolkit Mem Agent uses today, plus internet search.
- Queued multi-step work, executed after the call. "Consolidate all my Europe trip notes into one plan." Mem says "I'll pull that together," keeps the conversation flowing, and executes the heavier work after the huddle ends — then messages you when it's done. This async-execution pattern (sketched after this list) is the non-obvious crown jewel. Without it, Huddle is just Mem Chat with a microphone.
- Proactive dovetailing. When you raise a topic, Mem surfaces something related and useful from your Mem — "by the way, didn't you say you'd send Sarah the SOC 2 report?" Different from generic time-based reminders; this rides whatever you're actually talking about.
- Mem calls us. The top-tier demo inversion: Mem Agent decides it has enough stacked up to warrant a huddle, pushes a notification, and opens the conversation with its own agenda when we pick up.
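To make the async-execution pattern concrete: a rough sketch, assuming a realtime voice session that exposes tool calls. `HuddleSession`, `runMemAgentTask`, and `notifyUser` are hypothetical stand-ins for whatever the chosen SDK and Mem Agent actually expose:

```ts
type QueuedTask = { description: string; queuedAt: Date };

// Hypothetical stand-ins so the sketch is self-contained.
declare function runMemAgentTask(description: string): Promise<string>;
declare function notifyUser(message: string): Promise<void>;

class HuddleSession {
  private queue: QueuedTask[] = [];

  // Exposed to the realtime voice model as a tool. When the user asks for
  // multi-step work, the model calls this, gets an acknowledgement to speak,
  // and the conversation keeps moving; nothing heavy runs mid-call.
  queueWork(description: string): string {
    this.queue.push({ description, queuedAt: new Date() });
    return "Queued. I'll pull that together after we hang up.";
  }

  // Called when the call ends: the deferred agent work runs now, off the
  // conversational hot path, and the user gets a message with each result.
  async onHangup(): Promise<void> {
    for (const task of this.queue) {
      const result = await runMemAgentTask(task.description);
      await notifyUser(`Done: ${task.description}\n\n${result}`);
    }
    this.queue = [];
  }
}
```

The split is the point: the conversational turn only acknowledges, and everything slow runs after hangup, which is what keeps the huddle feeling live.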
The team's bet: form factor (iOS vs. web vs. desktop) is downstream of which realtime voice SDK we pick on Day 1. iOS is the natural hero for the ICP if the SDK supports it cleanly; otherwise, web/desktop comes first with an architecture portable enough to move to iOS later.
What the demo looks like: a live voice call with Mem. Five or six scripted interactions covering capture, queued work, recall, and a proactive dovetail. After hanging up, Mem messages us with the results of the queued work (structured notes, drafted emails, etc.). If we land the Bonus milestone, the top-tier moment is Mem calling us with an agenda.
Why it matters: voice is how busy operators actually want to work — in the car, on a walk, between meetings, AirPods in. Voice Mode today is constrained to a single note; Huddle up-levels voice to the entire Mem. Multimodal is how we meet the ICP where they are.
3. Smart Views
The pitch: a conversational, agentic view builder. Type a short title ("Mem Teams initiative", "Acme Co deal"); an agent reads your Mem, composes a view from a library of UI building blocks, synthesizes the content, keeps it up to date as new material lands, and lets you reshape it just by talking to it. A self-maintaining dashboard, built for you.
Core workflows:
- Title → view. Minimal input in, useful view out. The agent chooses the layout based on what it finds in your Mem; the user doesn't design the layout themselves.
- Automatic incorporation. Dictate a sales-call note; the relevant Smart View updates itself within minutes. No tagging, no filing. Obvious-fit content is incorporated directly; ambiguous cases surface as suggestion cards.
- Conversational iteration. "Add a weekly-progress section at the top." "Group the prospect table by deal stage." Structural edits are applied immediately and persisted as part of the view's ongoing state, so they survive regeneration as new content lands.
- Provenance. Any synthesized statement traces back to the source note that informed it. (One possible persisted shape for a view is sketched after this list.)
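One way to picture the system: the view is a persisted spec the agent writes and rewrites, not a one-off rendering. A sketch of one possible shape, with field names that are assumptions rather than the actual schema:

```ts
// The agent composes a view from a small library of building blocks; the user
// never designs the layout directly.
type BlockKind = "summary" | "table" | "timeline" | "checklist";

interface SourceRef {
  noteId: string;   // the note a synthesized statement traces back to
  excerpt?: string; // optional snippet to show when the user clicks through
}

interface ViewBlock {
  kind: BlockKind;
  title: string;
  content: string;      // synthesized by the agent from matching notes
  sources: SourceRef[]; // provenance for the synthesized content
}

interface SmartView {
  title: string;             // the short title the user typed, e.g. "Acme Co deal"
  blocks: ViewBlock[];       // agent-chosen layout, not a fixed template
  structuralEdits: string[]; // e.g. "add a weekly-progress section at the top";
                             // replayed on each regeneration so edits stick
  updatedAt: string;         // bumped as new notes are auto-incorporated
}
```

Storing the user's structural edits as data alongside the blocks is what lets "reshape it by talking to it" survive regeneration as new content lands.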
The team's bet: build the system (title → agent-composed view → auto-incorporation → conversational iteration) end-to-end. Pick one real scenario everyone can relate to as the demo vehicle — likely a real work initiative or customer. The scenario is the proof point; the system is the work. Not a templated layout; an LLM-composed one.
What the demo looks like: type a short title; a useful view composes itself from existing Mem content. Capture a new note; watch the view incorporate it automatically. Tell the view to reshape itself ("add a section for X, move Y above Z") and watch the reshaping stick across regeneration. Click a synthesized claim and see the source note.
Why it matters: Collections today are bags of content; Smart Views are collections with a brain. For users juggling recurring threads (customers, deals, projects, initiatives), Smart Views become the living front page for each — the dashboard power users build by hand today, built for them.
How the three fit together
Picture a day in the life of the product we're pointing at.
A small-business operator gets in the car and starts their commute. They tap to start a Huddle and ask what's on their plate today — Mem runs through the calendar and pending commitments and gives a crisp brief. They mention an idea for a new customer-onboarding flow; Mem reacts to it, plays it back a little, asks a clarifying question, and silently queues the work of turning it into a structured note.
They arrive at their desk. They open the Smart View for the initiative they're focused on this quarter. The consolidated view Mem put together during the huddle is already there — the new onboarding idea integrated into the right section, source material traceable back to the raw dump. They tell the view to add a section for open blockers; it complies and remembers.
They jump into a meeting. After the meeting, between blocks, they push-to-talk to Floating Mem: "Action items from that call — send Sarah the SOC 2 report, update the pricing slide, and follow up with Ben on integration scoping." The bubble pulses. Mem folds those items into the meeting note they were working on — not a new note — and queues reminders to nudge them later in the day.
That's three surfaces, one product:
- Huddle Mode is the high-bandwidth conversation surface — voice, in motion, with full recall.
- Smart Views are the destinations — where topics live, synthesized and actionable.
- Floating Mem is the ambient quick channel at the desk — always there, one gesture away.
For hack week, each team is focused on their own project. But the three are deliberately complementary, and good demos will feel consistent with each other even though no one is doing formal integration work. If someone on the Huddle team watches a Floating Mem demo and says "oh, that's the bubble I'd want to launch a huddle from," good — that's the point. We don't need to wire the seams; we just need the demos to hint at them.
Boil the ocean
One framing for this week. Earlier this year, Garry Tan (YC's CEO) wrote Boil the Ocean — a pushback on the long-standing startup advice of "don't boil the ocean." His argument: AI-assisted development has changed what's actually feasible. Teams are shipping comprehensive products in timescales that would have been laughable eighteen months ago. The old advice to scope tight made sense when tokens were expensive and keystrokes were the bottleneck. Neither is true anymore.
For this week: don't build a 1.05× version of the idea. Build toward the 10× version. Crank through tokens. Build comprehensive demos. Put the whole concept on screen even if the code under some of it is duct tape. The constraint isn't "what can five engineers build in a week" anymore — it's "what can five engineers plus AI build in a week," and the answer is a lot more than we think.
That doesn't mean perfection; it means ambition. See the "winning the week" bar below for what this looks like in practice.
What winning the week looks like
For each team: a demo that makes the heart of the concept land. Someone watching the demo, with no context, should go "oh, I get it — and I haven't seen anything like this before." That's the bar: a purple cow — something that looks genuinely different from any product on the market. The individual dossier for each team names what minimum / strong / wow versions look like.
For the group: three demos that, shown back-to-back, tell one story — this is where Mem is going. Each team's demo is most valuable as a piece of that larger narrative, not as a standalone artifact.
For the week overall: we learn. We should end Friday knowing what's hard vs. easy about each concept, which open questions actually matter, and where the real product bets need to go next. A short list of "things we learned that should inform the real product decision" is almost as valuable an output as the demos themselves.
Ground rules
- Real data beats fake data. If your demo needs a certain kind of content, seed from a real person's Mem (with their permission). Synthetic demos look synthetic.
- Scope on Day 1, then stop debating. Each dossier has a "key decisions" section. Make those calls fast; don't relitigate them mid-week.
- Rough edges are expected, and totally fine. Don't polish what doesn't need polish yet — just make sure the thing actually works. No movie-magic demos.
- Ambition over polish. There's more to learn from landing the heart of an ambitious idea roughly than from polishing a thinner version. If you have to choose, aim big.
- Cross-team coordination is welcome but optional. If Floating Mem wants to launch a Huddle from the bubble, talk. But no team's demo should depend on another team's shipping.
- Demos on Friday. All three teams show. 10–15 minutes each, live, real.
Have fun. Build the thing.