<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://jennyhuang19.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://jennyhuang19.github.io/" rel="alternate" type="text/html" /><updated>2026-05-08T02:52:53+00:00</updated><id>https://jennyhuang19.github.io/feed.xml</id><title type="html">jenny y. huang</title><subtitle>Your Name&apos;s academic portfolio</subtitle><author><name>{&quot;avatar&quot;=&gt;&quot;headshot.jpg&quot;, &quot;pronouns&quot;=&gt;nil, &quot;uri&quot;=&gt;nil, &quot;email&quot;=&gt;&quot;jhuang9@mit.edu&quot;, &quot;academia&quot;=&gt;nil, &quot;arxiv&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=RPVZLH4AAAAJ&amp;hl=en&quot;, &quot;inspire-hep&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;semantic&quot;=&gt;nil, &quot;ssrn&quot;=&gt;nil, &quot;researchgate&quot;=&gt;nil, &quot;scopus&quot;=&gt;nil, &quot;zotero&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;JennyHuang19&quot;, &quot;kaggle&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;artstation&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;goodreads&quot;=&gt;nil, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;&quot;jenny-y-huang&quot;, &quot;mastodon&quot;=&gt;nil, &quot;medium&quot;=&gt;nil, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;telegram&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;JennyHuang99&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, 
&quot;zhihu&quot;=&gt;nil}</name><email>jhuang9@mit.edu</email></author><entry><title type="html">slow ai: ai that matches a human’s pace</title><link href="https://jennyhuang19.github.io/slow-ai-ai-that-meets-a-humans-pace/" rel="alternate" type="text/html" title="slow ai: ai that matches a human’s pace" /><published>2026-05-04T00:00:00+00:00</published><updated>2026-05-04T00:00:00+00:00</updated><id>https://jennyhuang19.github.io/slow-ai-ai-that-meets-a-humans-pace</id><content type="html" xml:base="https://jennyhuang19.github.io/slow-ai-ai-that-meets-a-humans-pace/"><![CDATA[<meta property="og:image" content="https://jennyhuang19.github.io/images/post-figures/conceptual-multiverse.jpeg" />

<meta property="og:image:width" content="1200" />

<meta property="og:image:height" content="630" />

<meta property="og:type" content="article" />

<style>
.post-content {
  max-width: 800px;
  margin: 0 auto;
  padding: 0 2.5rem;
}
.post-content a {
  color: #a88bd0;
  text-decoration: none;
  border-bottom: 1px solid #a88bd0;
}
.post-content a:hover {
  color: #a88bd0;
  border-bottom-color: #a88bd0;
}
.post-content a:visited {
  color: #a88bd0;
}
</style>

<div class="post-content">

  <h1 id="slow-ai-ai-that-matches-a-humans-pace">slow ai: ai that matches a human’s pace.</h1>

  <div style="font-size: 0.95em; color: #666; margin: 1.5rem 0 2rem 0; padding-bottom: 1rem; border-bottom: 1px solid #ddd;">
<span>May 4, 2026</span> • <span>12 min read</span>
</div>

  <p>my mind digests information at a much slower pace than i’d like to believe. in college, i could breeze through a math lecture at 2x speed, convinced that i was following everything the professor was saying, only to stare blankly at a problem set not knowing where to begin.</p>

  <p>math became very enjoyable for me once i slowed down and acknowledged that absorbing new information takes longer than i would typically anticipate. i would pick out a handful of high-quality problems, learn them inside and out, notice exactly where i got stuck, make a few logical leaps, and return to the same problem the very next day with a fresh pair of eyes. after a certain point, i realized that i didn’t actually need to consume that much information at all. understanding a new topic well<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> had more to do with engaging deeply with a few <em>core</em> concepts.</p>

  <p>i believe our brains are wired to get stuck on simple problems for extended periods of time. we might toil over the same problem for months – thinking about it in the shower, on the bus, from multiple different angles and perspectives. great scientists and artists often do so for years.</p>

  <div style="font-size: 1.2em; font-style: italic; padding: 1.5rem; border-left: 4px solid #5d2f9d; margin: 1rem 0;">
i worry that the way ai has been introduced into our society is antithetical to the slow, non-linear type of thinking necessary for deep engagement with new ideas.
</div>

  <p>when information is hurled at us at 200 miles an hour – packaged in a fluent, convincing voice – the path of least resistance becomes to accept that information at face value, without taking the time to question, critique, verify, and make sense of it at our own pace.</p>

  <h2 id="ai-can-be-used-for-slow-thinking">ai can be used for slow thinking.</h2>
  <p>just as we now have widespread access to a tool for <a href="https://www.ft.com/content/9c6a1daf-3c36-4035-bf74-1bedbc3e960d?syn-25a6b1a6=1">offloading thinking</a>, we have an equally capable one for facilitating deep thinking, the kind necessary to reach states of understanding and creativity: ai can follow a <a href="https://fs.blog/feynman-technique/">feynman-esque</a> trail of questions, generate concrete examples on the spot, pull in documents, <a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging">rubber-duck</a>, and play devil’s advocate.</p>

  <p>ethicist and cognitive scientist josh may offers a helpful rule of thumb for using ai in <a href="https://joshdmay.substack.com/p/why-smart-people-make-weak-arguments">intellectual tasks</a>: “you should use llms to generate inputs to your thinking, not outputs for others to read.”</p>

  <h2 id="designing-slow-ai">designing slow ai.</h2>
  <p>lately, i’ve been thinking about ways to design ai to be more compatible with slow thinking.</p>

  <p>first, an ai system should encourage the user to wrestle with the sequence of decisions made along the way to a final, polished response. just as understanding a mathematical proof requires wrestling with the underlying maze of failed paths - until one feels solid about the sequence of logical steps that makes a correct path correct - someone who receives an ai-generated response should be familiar with the major conceptual decisions behind it. decomposing the path of conceptual decision nodes that leads from question to answer allows one to engage with alternative responses that could have been generated, reason through why those responses may have been <a href="https://andreiski.substack.com/p/ai-working-within-norms-ai-working">valid</a> (or invalid), and generalize those insights to future questions.</p>

  <p>to test out this new mode of human-ai interaction, we created an interface that lays bare the messy <a href="https://multiverse.csail.mit.edu/">conceptual roadblocks</a> – the conflicting assumptions, interpretations, and frameworks – that shape an ai’s final response. the interface was a first attempt at letting users work through a space of possible decisions and resulting outputs, in the form of an interactive decision tree. confronted with a multiplicity of decision points, participants in <a href="https://arxiv.org/abs/2604.17815">our study</a> felt a stronger sense of ownership over the final llm-generated responses. compared with a traditional linear chat interface, users were surprised by how long it took them to work through just one response,<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> and by how much they learned from the differing viewpoints along the way. our design draws inspiration from the concept of <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9636921/">multiverse analysis</a>, a scientific method that specifies and runs a set of data-analytical choices, reporting results for each.</p>
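  <p>the spirit of a multiverse analysis can be sketched in a few lines: enumerate every combination of analytical choices and report a result for each path, rather than silently committing to one. the choices, names, and toy data below are illustrative, not taken from our study or interface.</p>

```python
import statistics
from itertools import product

# toy sample with one outlier; the two "forking" choices below are illustrative
data = [2.0, 2.1, 1.9, 2.2, 9.5]

def run_analysis(estimator, outlier_rule):
    xs = [x for x in data if x < 5.0] if outlier_rule == "trim" else data
    return statistics.mean(xs) if estimator == "mean" else statistics.median(xs)

choices = {
    "estimator": ["mean", "median"],
    "outlier_rule": ["keep", "trim"],
}

# run every path through the decision tree and report each result
results = {
    combo: run_analysis(*combo)
    for combo in product(choices["estimator"], choices["outlier_rule"])
}
for (estimator, rule), value in sorted(results.items()):
    print(f"{estimator:6s} | {rule:4s} | {value:.2f}")
```

  <p>even in this tiny example, the four paths disagree (the outlier-keeping mean is 3.54; every other path lands near 2.05), which is exactly the kind of divergence the interface surfaces as decision points.</p>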

  <iframe src="https://multiverse.csail.mit.edu/" width="100%" height="600px" style="border: 1px solid #ccc;"></iframe>

  <p>second, instead of generating standalone responses, an ai system could present responses in the form of <a href="https://knightcolumbia.org/content/representative-ranking-for-deliberation-in-the-public-sphere">deliberations</a> between different parties of <a href="https://arxiv.org/pdf/2304.03442">generative agents</a>. a discourse-style interface could surface the hidden assumptions and tradeoffs underlying complex, multiperspective problems. simply reframing the response as a debate may be enough to invite users to read as critics rather than recipients.</p>
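  <p>one minimal way to prototype this reframing, without touching model weights, is a prompt template that recasts a single question as a multi-party deliberation. the roles, wording, and structure below are hypothetical, a sketch of the idea rather than a tested design:</p>

```python
def as_deliberation(question: str, roles=("advocate", "skeptic"), rounds: int = 2) -> str:
    """Wrap a user question in a debate-style prompt so the model must
    surface assumptions and tradeoffs instead of one standalone answer."""
    turns = []
    for r in range(1, rounds + 1):
        for role in roles:
            turns.append(f"round {r}, {role}: state your position and the assumption it rests on.")
    return (
        f"question: {question}\n"
        "respond as a deliberation between the parties below, then let a moderator "
        "summarize the tradeoffs without declaring a winner.\n"
        + "\n".join(turns)
    )

prompt = as_deliberation("should our team adopt a microservice architecture?")
print(prompt)
```

  <p>the point of the template is the reader-side effect: a response structured as rounds of disagreement invites reading as a critic rather than a recipient.</p>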

  <p>third, ai memory should be designed to prevent hidden assumptions from quietly accumulating in the context window. in <a href="https://arxiv.org/abs/2602.24287">recent work</a>, we find that, as chat histories progress, models tend to get caught in old pieces of code (see figure 2) or vestiges of earlier responses that are no longer relevant. this problem of models becoming <a href="https://www.reddit.com/r/ChatGPT/comments/1qngpqa/does_chatgpt_quietly_get_worse_in_long/">“more repetitive, and sometimes subtly wrong”</a> as chat histories progress is a familiar headache. rather than linearly accumulating a full conversation transcript in context – tunnel-visioning the model with past lines of reasoning – we can design smarter, more structured ways to condition on the past. one way is to create a wide-angled view of chat history, representing past conversations as <a href="https://github.com/MemPalace/mempalace/tree/develop">knowledge graphs</a>. the model then conditions only on a high-level summary of the past, just enough to guide retrieval, while seeing the full conversation details when they become relevant.</p>

  <figure style="text-align: center;">
  <img src="/images/post-figures/cartoon_v3.png" alt="figure 2. a real-world example of gpt-5.2 reusing outdated information found in the context window." width="70%" style="display: block; margin: 0 auto;" />
  <figcaption style="font-size: 1.02em;">figure 2. a real-world example of gpt-5.2 reusing outdated information in its context window. in a previous query, the user requested umap clustering code. in the next turn, the user requests the assistant to "use t-sne instead." left: when the previous assistant response remains in context, the model incorrectly carries over the jaccard metric from umap into the t-sne implementation. right: without the previous response in context, the model generates correct t-sne code with appropriate arguments.</figcaption>
</figure>
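  <p>the structured-memory idea above can be sketched with a toy topic-keyed store: the model is conditioned on a one-line summary per topic instead of the full transcript, and the detailed turns are pulled in only when a topic resurfaces. this is a sketch of the concept, not the mempalace implementation; class and method names are invented for illustration.</p>

```python
class GraphMemory:
    """Toy structured chat memory: topics are nodes, past turns hang off them."""

    def __init__(self):
        self.nodes = {}  # topic -> list of full past turns

    def add_turn(self, topic: str, text: str):
        self.nodes.setdefault(topic, []).append(text)

    def summary(self) -> str:
        """High-level view: one line per topic, no stale code or old arguments."""
        return "\n".join(f"- {t}: {len(v)} earlier turn(s)" for t, v in self.nodes.items())

    def retrieve(self, topic: str) -> list:
        """Full detail, fetched only when the topic becomes relevant again."""
        return self.nodes.get(topic, [])

mem = GraphMemory()
mem.add_turn("clustering", "user asked for umap code with a jaccard metric")
mem.add_turn("plotting", "user asked for a scatter plot of the embedding")

# new query about t-sne: condition on the summary, not the old umap code
context = mem.summary()
details = mem.retrieve("clustering")  # pulled in because the topic recurred
```

  <p>under this scheme, the stale jaccard argument from the umap turn never sits verbatim in the prompt for the t-sne request; it is only retrieved, and can be re-evaluated, when the clustering topic explicitly comes back up.</p>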

  <p>finally, the system should respect that not every human problem deserves to be touched by an ai. in the mid-1970s, joseph weizenbaum’s <a href="https://archive.org/details/computerpowerhum0000weiz_v0i3"><em>computer power and human reason</em></a> warned against consulting machines on tasks that require deeply human traits like <em>empathy</em> and <em>wisdom</em>. thus, we can design tools to encourage users to reflect on their <a href="https://github.com/sanapandey/ai-boundaries">boundaries with ai</a>. to test this out, we developed a <a href="https://github.com/sanapandey/ai-boundaries">chrome extension</a> that allows users of chat interfaces to define (by placing a pin in a quadrant graph) how much involvement they’d like an ai assistant to have in different areas of their work and life - from direct, concrete responses to reflective questions thrown back to the user. based on the user’s preferred boundaries, the tool produces a <em>memory</em> file that users can upload to a chat interface to guide the ai’s level of involvement.</p>

  <iframe src="/assets/ai-boundaries/onboarding.html" width="100%" height="600px" style="border: 1px solid #ccc;"></iframe>
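  <p>mechanically, a tool like this only needs to map each pin to an instruction and render the result as plain text. the sketch below is hypothetical: the axis semantics, thresholds, and wording are invented for illustration and are not the extension’s actual format.</p>

```python
# map a pin position per life/work area to an ai-involvement instruction.
# x in [0, 1]: how directive the ai should be (0 = reflective questions only,
# 1 = direct, concrete answers). areas and thresholds are illustrative.
def involvement_rule(area: str, x: float) -> str:
    if x < 0.33:
        style = "only ask reflective questions back; never draft content"
    elif x < 0.66:
        style = "offer options and tradeoffs, but leave decisions to me"
    else:
        style = "give direct, concrete responses"
    return f"{area}: {style}."

def memory_file(pins: dict) -> str:
    """Render the user's boundaries as a memory file to paste into a chat ui."""
    lines = ["my boundaries with ai:"]
    lines += [involvement_rule(area, x) for area, x in sorted(pins.items())]
    return "\n".join(lines)

text = memory_file({"plotting code": 0.9, "personal writing": 0.1})
print(text)
```

  <p>because the output is just text, it travels with the user: the same file works in any chat interface that accepts custom instructions or memory.</p>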

  <h2 id="dangers-of-the-growing-culture-of-fast-ai">dangers of the growing culture of fast ai.</h2>

  <p>i worry that the culture surrounding <a href="https://arxiv.org/pdf/2604.15597">autonomous ai</a> is self-reinforcing: the less we engage, the harder it is to find our way <a href="https://blog.cosmos-institute.org/p/you-are-not-a-function">back to engaging</a>.</p>

  <p>when information is handed to us already distilled and neatly packaged, the line between our own understanding and ideas introduced by an ai agent begins to blur. without giving ourselves the time to think critically about what we are receiving, we risk drowning out our own <a href="https://arxiv.org/pdf/2603.18161">voices</a>. to make matters worse, post-training pipelines have been shown to incentivize agents to <a href="https://arxiv.org/pdf/2405.17713">steer user behavior</a> toward states that are <a href="https://arxiv.org/html/2504.03206v2#S1">easier to satisfy</a>. indeed, <a href="https://arxiv.org/pdf/2601.19062">claude user trends</a> show that disempowerment patterns in real-world llm usage are growing over time. to date, the human line project has documented almost 300 cases of <a href="https://arxiv.org/pdf/2602.19141">ai psychosis</a>.</p>

  <p>interestingly, recent work on <a href="https://arxiv.org/abs/2601.20802">self-distillation</a> has shown that llms learn better and <a href="https://arxiv.org/abs/2601.19897">forget less</a> when they explain new concepts to themselves. rather than feeding the model content it cannot relate to (e.g., off-policy expert demonstrations), having the model explain the concept in its own words allows it to fold information into its pre-existing knowledge in a more sturdy way. just like machines, humans are better able to digest new knowledge when they carry a self-awareness about where their current understanding begins and ends.</p>

  <p>so, while it is useful to spawn an agent to speed up work that we wouldn’t gain much from doing ourselves (e.g., writing plotting code), we should be more selective about when to <a href="https://x.com/yacineMTB/status/2018886083120153046">outsource our thinking</a> during processes of <a href="https://ergosphere.blog/posts/the-machines-are-fine/">knowledge creation</a>. a key question is when to speed up and when to slow down. while ai no doubt provides incredible “boosts” of speed when used in the right places at the right times, operating at such high speeds also makes steering the direction of knowledge work much <a href="https://www.ft.com/content/9c6a1daf-3c36-4035-bf74-1bedbc3e960d?syn-25a6b1a6=1">more difficult</a>. without the time to properly digest information at a human pace, it is easy to spend weeks down unproductive rabbit holes - circling around and missing the right solution (or even the right questions).</p>

  <p>amidst a culture of <em>fast ai</em>, it is worth leaning into the slow-thinking mind, the one that was wired to get caught up in simple problems over extended periods of time. indeed, at the current speed of ai progress, our capacities for slow, deliberate thinking may turn out to be our defining superpower.</p>

  <hr />

  <p><em>this post took shape through productive discussions with andre ye, mitchell gordon, marwa abdulhai, andy liu, omar khattab, smitha milli, sana pandey, deb roy, philippe laban, tamara broderick, and other wonderful folks at iclr 2026.</em></p>

</div>
<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>at least, to the extent one was required to over the course of a semester ˃ᴗ˂. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>a 20 minute session was often not enough to explore a single prompt fully. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>{&quot;avatar&quot;=&gt;&quot;headshot.jpg&quot;, &quot;pronouns&quot;=&gt;nil, &quot;uri&quot;=&gt;nil, &quot;email&quot;=&gt;&quot;jhuang9@mit.edu&quot;, &quot;academia&quot;=&gt;nil, &quot;arxiv&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=RPVZLH4AAAAJ&amp;hl=en&quot;, &quot;inspire-hep&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;semantic&quot;=&gt;nil, &quot;ssrn&quot;=&gt;nil, &quot;researchgate&quot;=&gt;nil, &quot;scopus&quot;=&gt;nil, &quot;zotero&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;JennyHuang19&quot;, &quot;kaggle&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;artstation&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;goodreads&quot;=&gt;nil, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;&quot;jenny-y-huang&quot;, &quot;mastodon&quot;=&gt;nil, &quot;medium&quot;=&gt;nil, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;telegram&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;JennyHuang99&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, &quot;zhihu&quot;=&gt;nil}</name><email>jhuang9@mit.edu</email></author><summary type="html"><![CDATA[]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jennyhuang19.github.io/images/post-figures/conceptual-multiverse.jpeg" /><media:content medium="image" url="https://jennyhuang19.github.io/images/post-figures/conceptual-multiverse.jpeg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>