{
  "version": "https://jsonfeed.org/version/1",
  "title": "omlx on Honeypot.net",
  "icon": "https://www.gravatar.com/avatar/d0e26881a0f918e9d92c7f32c2e3aa9a?s=96&d=https%3A%2F%2Fmicro.blog%2Fimages%2Fblank_avatar.png",
  "home_page_url": "https://honeypot.net/",
  "feed_url": "https://honeypot.net/feed.json",
  "items": [
      {
        "id": "http://kirk.micro.blog/2026/05/02/ive-been-running-ollama-on.html",
        "content_html": "<p>I&rsquo;ve been running Ollama on my Mac Studio for local AI experiments. I followed advice to try oMLX instead and it&rsquo;s ludicrously faster, like maybe 5-10x for both time to first token and completing the response. I haven&rsquo;t benchmarked it, but it subjectively feels like when I replaced a hard drive with an SSD.</p>\n",
        "date_published": "2026-05-02T08:53:32-07:00",
        "url": "https://honeypot.net/2026/05/02/ive-been-running-ollama-on.html",
        "tags": ["ai","ollama","omlx"]
      }
  ]
}
