Team Briefing

Smart Views

A self-maintaining dashboard for the topics you return to constantly. You capture; it organizes.

Our goal: ship a conversational, agentic view builder — a system where you give Mem a short title like "Mem Teams initiative" or "Acme Co deal", and an agent finds relevant context in your notes, composes an appropriate layout from a library of UI building blocks, synthesizes the content, keeps itself up to date as new material lands, and lets you reshape it just by talking to it.

How to read this: top sections are the briefing; lower sections add depth. Smart Views is the most conceptually open project of the three, so we've made opinionated calls on scope below so we can get moving on Day 1. The section titled "The one question we're not deferring" is the most important thing in this doc — read it carefully.


TL;DR

Why this matters

Mem's positioning is "Your AI Thought Partner" — a system that remembers, organizes, and brings back. Smart Views is the clearest instantiation of that promise for the user's ongoing work. Collections today are bags of content; a Smart View is a living, synthesized surface that maintains itself.

For busy operators juggling multiple recurring threads (customers, deals, projects, initiatives), the job Smart Views does is: "open this thing and immediately see where I am, what's new, and what's next — without having built the dashboard by hand." That's what power users try to do today in notes apps and fail; that's what we make automatic.


What we're actually building

This framing matters enough to call out explicitly so we don't drift toward the wrong thing.

A Smart View is not a template with fixed sections. It is an agent-composed surface:

  1. Input: a short user intent — often just a title, optionally a sentence of description.
  2. Context gathering: the agent searches the user's Mem for related content.
  3. Composition: the agent chooses which UI building blocks to use (a timeline? a table? a prose summary? an action card?) and how to arrange them based on what it found.
  4. Population: the agent fills those blocks with synthesized content grounded in the user's source material.
  5. Maintenance: as new source material lands in Mem, the view decides whether and how to incorporate it.
  6. Iteration: the user tells the view what to change conversationally; structural edits are persisted and survive regeneration.
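As a purely illustrative sketch of steps 1 through 4, here is one way the lifecycle could be modeled. `Block`, `SmartView`, and `build_view` are hypothetical names, not the real implementation; the search, compose, and populate steps are passed in as stand-ins for the agent.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One UI building block chosen by the agent (names are hypothetical)."""
    kind: str      # e.g. "prose", "table", "timeline", "suggestion_card"
    content: dict  # synthesized content, grounded in source notes

@dataclass
class SmartView:
    title: str
    blocks: list[Block] = field(default_factory=list)
    edit_log: list[str] = field(default_factory=list)  # persisted user intent

def build_view(title: str, search, compose, populate) -> SmartView:
    """Title -> context -> composition -> population (steps 1-4)."""
    context = search(title)           # step 2: gather related notes
    layout = compose(title, context)  # step 3: agent picks block kinds
    blocks = [Block(kind, populate(kind, context)) for kind in layout]
    return SmartView(title, blocks)
```

Steps 5 and 6 (maintenance and iteration) would then operate on the persisted `SmartView`, appending to `edit_log` and re-running composition.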

The UI building-block library is the piece of infrastructure we need to get right. Things like: a section of prose, a table, a list, a timeline, a key-value block, a suggestion card.

The agent is the one choosing which blocks to use and how to compose them. We define the blocks; the agent does the UI design.
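One hypothetical shape for that library, using the starting block set proposed in the Day 1 decisions below (section + prose, table, list, timeline, key-value block, suggestion card). The registry structure and schema fields are assumptions for illustration, not a decided design:

```python
# A minimal block registry the agent composes from. The agent may only
# use registered kinds; we define the blocks, it does the UI design.
BLOCK_LIBRARY = {
    "prose":      {"fields": ["heading", "body"]},
    "table":      {"fields": ["columns", "rows"]},
    "list":       {"fields": ["items"]},
    "timeline":   {"fields": ["events"]},
    "key_value":  {"fields": ["pairs"]},
    "suggestion": {"fields": ["prompt", "action"]},
}

def validate_layout(layout: list[str]) -> list[str]:
    """Reject any layout that references an unregistered block kind."""
    unknown = [k for k in layout if k not in BLOCK_LIBRARY]
    if unknown:
        raise ValueError(f"unknown block kinds: {unknown}")
    return layout
```

A hard boundary like this keeps the agent's freedom where we want it (composition and content) while the rendering surface stays predictable.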


Core workflows (the hero paths)

1. Create a view from almost nothing

You type "Mem Teams initiative" into a new Smart View. Mem goes and reads your notes, pulls context from everything relevant — past meeting notes, prospect list, recent agent chat threads — and composes a view. You open it: a summary of the initiative at the top, a table of active prospects with status, a list of recent activity, a section of open questions. You didn't specify any of that structure. The agent chose it.

The magic starting point. Minimum user input → useful view.

2. Capture source material, view auto-incorporates

You finish a sales call with Acme Co. You dictate a voice note. It lands in Mem. A few minutes later, the Mem Teams Smart View has updated: the row for Acme Co in the prospect table now shows the new status, the "Recent activity" section lists the call with a summary, and a suggestion card appears at the bottom: "Draft the follow-up email Sarah asked for?"

You never told Mem this note belonged to the Mem Teams view. The view noticed and incorporated it.

This is the core loop. Capture is the user's only job; the views reshape themselves around it. Some incorporation is silent (obvious-fit content added directly); some is surfaced as a suggested change that the user can accept or reject. The agent uses discretion.
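A minimal sketch of what that discretion could look like as a rule, assuming placeholder relevance and section-match scores in [0, 1]; none of these thresholds or names are decided:

```python
def incorporate(note_relevance: float, section_match: float,
                silent_threshold: float = 0.8,
                suggest_threshold: float = 0.5) -> str:
    """Discretion rule sketch: obvious fits land silently, borderline
    fits become suggestions the user can accept or reject, weak fits
    are ignored. Thresholds are placeholder assumptions."""
    score = min(note_relevance, section_match)
    if score >= silent_threshold:
        return "apply_silently"
    if score >= suggest_threshold:
        return "surface_suggestion"
    return "ignore"
```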

3. Open the view as the home for the topic

You tap the Mem Teams Smart View. You see the current state of your initiative at a glance — the shape you didn't design but that matches how you think about it. At the bottom is the raw source material, browsable.

Whatever sections the view ends up with — current state, prospect table, next-step suggestions, source material — those aren't hardcoded. They're the sections the agent tends to produce for this kind of topic given this kind of content. A different topic with different content would produce a different layout.

4. Reshape the view conversationally

You're looking at the Mem Teams Smart View. You type in a chat attached to the view: "Add a section at the top summarizing weekly progress." Mem adds it. Over the next week, as you capture new material, that section updates.

Later: "Move the prospect table above the activity feed." Done. "Actually, group the prospect table by deal stage." Done.

This is the crux workflow. Users don't customize via settings panels — they tell the view what they want. Those structural instructions are persisted as part of the view's ongoing state (think: a living prompt + edit history the view carries with it), so the layout and shape they've coaxed it into survive regeneration as new data lands.

5. Trace synthesis back to source

A line in the view reads: "Acme Co is interested in a Q3 pilot." You click it. Mem shows you the source — the meeting note from last Tuesday where Sarah said exactly that.

Provenance is essential. The user has to be able to audit any synthesized claim back to its source. This is what makes the view trustworthy rather than a black box that might be hallucinating.
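One illustrative way to carry provenance through synthesis is to attach source-note IDs to every claim, so the click-through is just a lookup. The field and function names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A synthesized statement plus the source notes it was grounded in."""
    text: str
    source_note_ids: tuple[str, ...]

def trace(claim: Claim, notes: dict[str, str]) -> list[str]:
    """Clicking a claim resolves its source notes for the audit view."""
    return [notes[nid] for nid in claim.source_note_ids if nid in notes]
```

The design point: provenance is captured at synthesis time, not reconstructed after the fact by searching for the claim.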


Our scoping bet for the week

Smart Views is conceptually big. The bet:

Build the system (title → agent-composed view → auto-incorporation → conversational iteration) end-to-end. Pick one scenario we can all relate to and build the demo around it.

The scenario is the proof point; the system is the work. What we're not doing:

What we are doing:


Scenario selection — our one big open call for Day 1

We need a scenario that:

Strong candidates:

We'll commit by end of Day 1.


Milestones

M0 — Seed the data

Gather real Mem content for the chosen scenario — enough that the agent has something substantive to compose a view from.

M1 — Title → view

M2 — Automatic incorporation of new source material

M3 — Conversational iteration

M4 — Provenance + polish

Stretch

Not in scope this week


The one question we're not deferring: conversational iteration & structural memory

Most of the concept-level ambiguity in Smart Views (typed vs. untyped, tree vs. graph, relationship to Collections) we're deferring for the week. One question we are NOT deferring, because it's the UX and data-model crux:

How do we persist the structural intent a user expresses conversationally, so that it survives regeneration as new content lands?

Concretely: the user says "add a weekly-progress section at the top." That instruction has to:

Our working model (subject to the team refining during the week): each Smart View carries an edit log of user-expressed structural intent. On regeneration, the agent takes the view's title + context + edit log as its prompt. The edit log preserves the spirit of what the user wanted, even when the surface content underneath changes.

This applies to direct edits too. If the user hand-edits a synthesized section, that edit needs to be logged and respected by future regenerations — the spirit of the edit, not necessarily the literal text. This is how we square "views are self-maintaining" with "users can shape them."
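Under this working model, regeneration might assemble its prompt roughly like this. A sketch only: the prompt format and function name are assumptions, and the real edit log would likely be structured operations rather than raw strings.

```python
def regeneration_prompt(title: str, context: list[str],
                        edit_log: list[str]) -> str:
    """Working-model sketch: every regeneration prompts the agent with the
    view's title, fresh context, and the full log of user-expressed
    structural intent, so conversational and hand edits survive."""
    lines = [f"View title: {title}", "", "Source context:"]
    lines += [f"- {c}" for c in context]
    lines += ["", "User structural intent (apply in order, preserve the spirit):"]
    lines += [f"{i + 1}. {e}" for i, e in enumerate(edit_log)]
    return "\n".join(lines)
```

Because the log is replayed on every regeneration, the user's instructions constrain the new layout even when the content underneath has completely changed.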

This is the piece we most need to figure out. Everything else is an execution question; this is a design question.


More example vignettes


Framing questions we are deferring for this week

Each of these deserves a real answer eventually. None of them block a great hack-week demo.

  1. Typed vs. untyped vs. object-native. Is "Smart View" one generic primitive, or do we have named types (Project View, Customer View), or do we drop the word entirely and give Mem first-class objects like "Projects" and "Customers"? Our bet (generic + LLM-composed) works under any of these framings.
  2. Tree vs. graph. Can Smart Views contain other Smart Views? For the week, scope flat. Note learnings as we go.
  3. Relationship to Collections. Is a Smart View the evolution of a Collection or a new thing layered on top? Don't resolve; just avoid conflicts with existing Collection semantics in the ingestion path.

Key decisions to make on Day 1

  1. Scenario. Commit to one. Real content from a real Mem. Most likely Mem Teams initiative or a specific customer within it; Trip is off the table.
  2. The building-block library. Pick the starting set of UI blocks the agent can compose from (section + prose, table, list, timeline, key-value block, suggestion card). Two hours of whiteboarding; don't over-design. The library will grow.
  3. How we represent the view's structural memory. The edit-log model above is a working draft. Pressure-test it Day 1. This is the piece that most needs a coherent answer early because downstream work depends on it.
  4. Incorporation discretion policy. When does an incoming note get directly incorporated vs. surfaced as a suggestion? A simple rule (confidence-threshold, or section-match-quality) for the week; refined as we see it in practice.
  5. Regeneration cadence. Pick the simplest sensible default (e.g., debounced 5 minutes after a relevant note changes; manual-refresh button available). Don't over-invest; it's tuning, not architecture.
  6. One proactive suggestion type for the demo. Pick one flavor of agent-suggested change (e.g., "add this to the prospect table") and drive it end-to-end, rather than building a generalized suggestion framework.
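For decision #5, the debounced default could be sketched like this, with an injected clock so the behavior is testable. Class, method, and parameter names are illustrative, and the 5-minute value is just the default suggested above:

```python
import time

class RegenerationScheduler:
    """Debounce sketch: regenerate once no relevant note has changed for
    `debounce_s` seconds; a manual refresh bypasses the wait."""

    def __init__(self, regenerate, debounce_s: float = 300.0,
                 clock=time.monotonic):
        self.regenerate = regenerate
        self.debounce_s = debounce_s
        self.clock = clock
        self.last_change = None  # time of the most recent relevant change

    def note_changed(self):
        """A relevant note changed; restart the quiet-period timer."""
        self.last_change = self.clock()

    def tick(self):
        """Called periodically; fires once the view has been quiet."""
        if (self.last_change is not None
                and self.clock() - self.last_change >= self.debounce_s):
            self.last_change = None
            self.regenerate()

    def manual_refresh(self):
        """Manual-refresh button: regenerate immediately."""
        self.last_change = None
        self.regenerate()
```

As the doc says, this is tuning rather than architecture; the only structural choice here is that regeneration is triggered by quiet periods, not by every note edit.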

What "done" looks like for the week

Minimum demo:

Strong demo:

Wow demo:


Things Smart Views are not (for this hack week)