<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>AI on Nelson Figueroa</title>
    <link>https://nelson.cloud/categories/ai/</link>
    <description>Recent content in AI on Nelson Figueroa</description>
    <image>
      <title>Nelson Figueroa</title>
      <url>https://nelson.cloud/opengraph-images/default.png</url>
      <link>https://nelson.cloud/opengraph-images/default.png</link>
    </image>
    <language>en</language>
    <lastBuildDate>Sat, 04 Apr 2026 14:40:21 -0700</lastBuildDate>
    <atom:link href="https://nelson.cloud/categories/ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>I Created My First AI-assisted Pull Request and I Feel Like a Fraud</title>
      <link>https://nelson.cloud/i-created-my-first-ai-assisted-pull-request-and-i-feel-like-a-fraud/?ref=rss</link>
      <pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://nelson.cloud/i-created-my-first-ai-assisted-pull-request-and-i-feel-like-a-fraud/?ref=rss</guid>
      <description>I used AI to contribute to an open source project. The code was merged. I didn&amp;rsquo;t learn anything and I felt bad as an engineer.</description><content:encoded><![CDATA[<p>I created 

<a href="https://github.com/alecthomas/chroma/pull/1228" target="_blank" rel="noopener">my first AI pull request</a> for 

<a href="https://github.com/alecthomas/chroma" target="_blank" rel="noopener">Chroma</a>, which is the default syntax highlighter used in 

<a href="https://github.com/gohugoio/hugo" target="_blank" rel="noopener">Hugo</a>.</p>
<p>I used Claude Code for that PR. I didn&rsquo;t learn anything. I felt like I was flinging slop over the wall to an open-source maintainer. I felt like a fraud and my impostor syndrome got worse.</p>
<p>But here&rsquo;s the thing: I still contributed something of value. For a long time I&rsquo;ve wanted Hugo to syntax highlight 

<a href="https://github.com/ruby/erb" target="_blank" rel="noopener">ERB</a> snippets in my posts. The PR was approved and merged by the maintainer (thanks for dealing with my slop, 

<a href="https://github.com/alecthomas" target="_blank" rel="noopener">Alec</a>).</p>
<p>It just feels odd. I know there are many people who are excited about this new era of writing code. But for me it has sucked out all of the fun. I have never felt like a bigger fraud in this field. I&rsquo;ve always thought that I&rsquo;m privileged to get to have fun at all in my line of work. As Ori Bernstein says: 

<a href="https://orib.dev/nofun.html" target="_blank" rel="noopener">&ldquo;using LLMs to write code is as fun as hiring a taskrabbit to solve my jigsaw puzzles&rdquo;</a>.</p>
<p>But then again, I know that realistically I would not have the mental capacity or skill to create a pull request like that without AI tooling. My brain is already fried from work on most days. I don&rsquo;t think I would have been able to learn the codebase and get enough context to make that PR all by myself. It&rsquo;s a lot. I just wanted some ERB syntax highlighting for my little blog.</p>
<p>Even at work I&rsquo;ve used Claude Code and other AI tooling to deliver fixes and improvements that have real customer impact. But no matter how big the impact, I feel empty. I agree with Xe Iaso in the first sentence of their post: 

<a href="https://xeiaso.net/blog/2026/ai-abstraction/" target="_blank" rel="noopener">&ldquo;Whenever I have Claude do something for me, I feel nothing about the results&rdquo;</a>.</p>
<p>Now that using AI is a normal expectation at work and part of how I&rsquo;m evaluated in performance reviews, I suspect that this fraud feeling will only grow. The industry as a whole incentivizes delivering code/features/fixes at a quick pace, even if it&rsquo;s all just slop.</p>
<p>I keep thinking about whether I&rsquo;ve tied my identity too closely to my career. I&rsquo;m not the greatest engineer, but I&rsquo;ve always worked hard to deliver good work and learn as much as possible. I care about understanding the underlying systems as deeply as I can. I care about the craftsmanship of my code (to the best of my abilities). Unlike me, AI tools don&rsquo;t care about any of these values.</p>
<p>At the end of the day, the shareholders care about delivering features, gaining customers, and making money. They don&rsquo;t care how software is built.</p>
<p>I don&rsquo;t know what to make of this.</p>
<hr>
<p>Discussion on 

<a href="https://news.ycombinator.com/item?id=47497679" target="_blank" rel="noopener">Hacker News</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Local Text Summarization With Ollama and Python Is Just String Manipulation</title>
      <link>https://nelson.cloud/local-text-summarization-with-ollama-and-python-is-just-string-manipulation/?ref=rss</link>
      <pubDate>Sun, 24 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://nelson.cloud/local-text-summarization-with-ollama-and-python-is-just-string-manipulation/?ref=rss</guid>
      <description>Generate a string with Python, pass it into Ollama, and you get a string in return. That&amp;rsquo;s it.</description><content:encoded><![CDATA[<p>I&rsquo;ve used LLMs before, but only through an interface (e.g. 

<a href="https://chatgpt.com/" target="_blank" rel="noopener">ChatGPT</a>, 

<a href="https://gemini.google.com/app" target="_blank" rel="noopener">Gemini</a>, etc.), so when I tried to run an LLM locally I was overthinking how it worked.</p>
<p>Basically, it comes down to this: You pass in a string, and you get a string in return. That&rsquo;s it.</p>
<p>So if we want to run an LLM locally using Python to summarize files, we build strings with Python and pass them into Ollama. If you want to read in files, open them in Python and concatenate the text with your prompt string, then pass the prompt string into Ollama.</p>
<p>Python is just a bridge between you and Ollama.</p>
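<p>To make the &ldquo;string in, string out&rdquo; idea concrete, here&rsquo;s a minimal sketch. The <code>build_prompt</code> helper is my own illustration, not part of the Ollama API; the actual model call (commented out) assumes a local Ollama install with a pulled model.</p>

```python
# Everything before the model call is plain string manipulation.
def build_prompt(text: str) -> str:
    # an ordinary f-string; nothing LLM-specific happens here
    return f"Can you summarize this for me? {text}"

prompt = build_prompt("The quick brown fox jumps over the lazy dog.")
print(prompt)

# The only non-string step is handing the prompt to Ollama:
#
#   import ollama
#   response = ollama.chat(
#       model="gpt-oss:20b",
#       messages=[{"role": "user", "content": prompt}],
#   )
#   print(response.message["content"])  # the reply is just another string
```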
<p>I&rsquo;ve included some basic examples. The examples assume you have Ollama installed locally.</p>
<h2 id="reading-a-single-file-into-ollama">Reading a Single File Into Ollama</h2>
<p>This is straightforward. Open a file and concatenate the text with a prompt which gets passed into Ollama:</p>
<pre><code class="language-python">import ollama

# open a single file
file = open("path/to/file.txt")

# read it and concatenate to the prompt
prompt = f'Can you summarize this file for me? {file.read()}'

# pass in the prompt to Ollama
response = ollama.chat(
    model='gpt-oss:20b',
    messages=[
        {
            'role': 'user',
            'content': prompt
        }
    ]
)

print(response.message['content'])
</code></pre>
<h2 id="reading-multiple-files-into-ollama">Reading Multiple Files Into Ollama</h2>
<p>If you want to pass in multiple files in one prompt, read each file and concatenate their contents into a single string, which you then concatenate into the prompt itself.</p>
<pre><code class="language-python">import ollama

# open several text files
file = open("path/to/file.txt")
file2 = open("path/to/file2.txt")

# concatenate the text files into a single string
text = file.read() + file2.read()

# concatenate into the prompt
prompt = f'Can you summarize the following text for me? {text}'

# pass in the prompt to Ollama
response = ollama.chat(
    model='gpt-oss:20b',
    messages=[
        {
            'role': 'user',
            'content': prompt
        }
    ]
)

print(response.message['content'])
</code></pre>
<h2 id="dealing-with-context-limits">Dealing With Context Limits</h2>
<p>If you want to read in multiple files but the files are huge, you may exceed the context limit of your model. You can still concatenate files/strings where they fit, but one way around the limit is to create a separate chat per file. You&rsquo;re basically running the same code more than once, which means you can use a loop.</p>
<pre><code class="language-python">import ollama

# open several text files and store them in a `files` list
files = []
files.append(open("path/to/file.txt"))
files.append(open("path/to/file2.txt"))

# run a separate chat for each file in the list
for file in files:
    prompt = f'Can you summarize the following text for me? {file.read()}'

    # pass in the prompt to Ollama
    response = ollama.chat(
        model='gpt-oss:20b',
        messages=[
            {
                'role': 'user',
                'content': prompt
            }
        ]
    )

    print(response.message['content'])
</code></pre>
<p>The code above gives us separate summaries. But what if we want a single summary of all the files, and each file exceeds the context limit on its own? We can create a summary of each file, then create a summary of the summaries! It&rsquo;s all string concatenation when you really think about it.</p>
<pre><code class="language-python">import ollama

# save summaries in a list to summarize later on
summaries = []

# open several text files and store them in a `files` list
files = []
files.append(open("path/to/file.txt"))
files.append(open("path/to/file2.txt"))

# run a chat for each file in the list
for file in files:
    prompt = f'Can you summarize the following text for me? {file.read()}'

    # pass in the prompt to Ollama
    response = ollama.chat(
        model='gpt-oss:20b',
        messages=[
            {
                'role': 'user',
                'content': prompt
            }
        ]
    )

    # append the summary of the file to the summaries list for later
    summaries.append(response.message['content'])


# create a single string from our list of summaries
summaries_string = "\n".join(summaries)

# start a final chat to summarize the summaries
prompt = f'Can you summarize the following text for me? {summaries_string}'

response = ollama.chat(
    model='gpt-oss:20b',
    messages=[
        {
            'role': 'user',
            'content': prompt
        }
    ]
)

print(response.message['content'])
</code></pre>
<p>That should be enough to get started. Thinking about all of this as just string manipulation made it &ldquo;click&rdquo; for me.</p>
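<p>The same summary-of-summaries trick works when a single file is too big for the context window: chunk the string first. This is a sketch with a made-up <code>chunk_text</code> helper and an arbitrary 4,000-character chunk size; tune the size to your model.</p>

```python
# Split one huge string into fixed-size chunks; each chunk would get its
# own prompt, exactly like the per-file loop above.
def chunk_text(text: str, chunk_size: int = 4000) -> list[str]:
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = chunk_text("some very long document " * 1000)
print(len(chunks))  # number of per-chunk prompts you would send

# Each per-chunk summary then gets collected and summarized one more time:
#
#   summaries = []
#   for chunk in chunks:
#       prompt = f'Can you summarize the following text for me? {chunk}'
#       # ... ollama.chat(...) as above ...
#       summaries.append(response.message['content'])
#   # then join the summaries and send one final summarization prompt
```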
<h2 id="references">References</h2>
<ul>
<li>

<a href="https://ollama.com/download/" target="_blank" rel="noopener">https://ollama.com/download/</a></li>
<li>

<a href="https://ollama.com/library/gpt-oss" target="_blank" rel="noopener">https://ollama.com/library/gpt-oss</a></li>
<li>

<a href="https://github.com/ollama/ollama-python/tree/main/examples" target="_blank" rel="noopener">https://github.com/ollama/ollama-python/tree/main/examples</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>AI-Generated Images Discourage Me From Reading Your Blog</title>
      <link>https://nelson.cloud/ai-generated-images-discourage-me-from-reading-your-blog/?ref=rss</link>
      <pubDate>Tue, 01 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://nelson.cloud/ai-generated-images-discourage-me-from-reading-your-blog/?ref=rss</guid>
      <description>If you&amp;rsquo;re willing to use AI-generated images, how do I know the text isn&amp;rsquo;t AI-generated?</description><content:encoded><![CDATA[<p>I have a growing hatred for AI-generated images in blogs. It makes me wonder if the text in the blog posts is AI-generated to some extent. It&rsquo;s always disappointing seeing these images in blogs run by individuals. I expect this from corporate blogs but not indie blogs.</p>
<p>I&rsquo;d rather see a shitty 

<a href="https://en.wikipedia.org/wiki/Microsoft_Paint" target="_blank" rel="noopener">Microsoft Paint</a> drawing than some AI-generated image.</p>
<p>I know there are plenty of things you can roast my blog for, but at least you know for a fact that you&rsquo;re getting the thoughts of a real human being and not some 

<a href="https://en.wikipedia.org/wiki/Large_language_model" target="_blank" rel="noopener">LLM</a>.</p>
<p>If you run a personal blog, please avoid AI-generated images.</p>
<hr>
<p>Discussion over at 

<a href="https://news.ycombinator.com/item?id=42506989" target="_blank" rel="noopener">Hacker News</a></p>
]]></content:encoded>
    </item>
  </channel>
</rss>
