When the rubber duck talks back

Gaining insights from (and into) AI coding assistants


I’d been meaning to refactor the pagination logic in the Mastodon plugin for Steampipe. After a couple of abortive tries, I took another run at it this week with the help of the latest generation of LLM-powered coding assistants.

Here was the problem. The pre-release version of the plugin consolidated pagination for many tables in one place. That was a good thing, but the downside was that a single Steampipe table stood in for what should have been many. So you could say select * from mastodon_timeline, but then you had to qualify with where timeline = 'home' or where timeline = 'local' and so on. For a user of the plugin this was awkward: you’d rather say select * from mastodon_timeline_home or select * from mastodon_timeline_local, and reserve the where clause for more specific purposes.

The v1 plugin made separate tables, but duplicated the pagination logic on a per-table basis. It worked, and was good enough to ship the plugin in time to demo at FediForum, but it obviously needed improvement.

ChatGPT-4 and Sourcegraph Cody

Since then, Sourcegraph has released its new coding assistant, Cody, which you can run as a VS Code extension or on sourcegraph.com. This set up the possibility for an interesting comparison. ChatGPT-4 builds on OpenAI’s GPT-4 LLM; Sourcegraph’s Cody, on the other hand, uses Anthropic’s Claude.

Another key difference is that ChatGPT only has the context you paste into it. Cody, sitting inside VS Code, can see your repository and has all that context. And if you index your repo, which is something Sourcegraph is willing to do for beta users on request, then Cody has access to what are called embeddings that represent the structure of your code in various ways. These embeddings, according to Sourcegraph, can powerfully enhance your LLM prompts.

Even without embeddings, Cody offers quite a range of assistance, from a high-level overview of what your repo does to line-level improvements. It’s all packaged, in the extension, as a set of recipes behind buttons with names like Explain selected code, Improve variable names, and Smell code. I haven’t yet used these recipes enough to form solid opinions, though. For this exercise I used Cody mostly in a ChatGPT-like conversational way. In that mode, it’s wonderful to be able to select the code you want to talk about, instead of pasting it into the chat.

In both cases, as should be no surprise, it wasn’t enough to just ask the tools to consolidate the pagination logic. They were perfectly happy to propose solutions that could never work and might not even compile. So I began with a simpler version of the problem. Mastodon uses the same pagination machinery for APIs that return arrays of different kinds of results: Statuses (toots), Accounts, and Notifications. By focusing on these separately I reduced the duplicate pagination from 13 instances to three. Then, in a separate pass, I worked out how to collapse those into a single paginate function that accepted one of three data-fetching function parameters.
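
To make the phase 1 idea concrete, here’s a minimal sketch, with hypothetical names and simplified go-mastodon pagination handling, not the plugin’s actual code. Each status-flavored API wraps its own page-fetching call, and one helper owns the loop. Note that this version accumulates results and returns them in a batch, which, as described below, turned out to be the wrong shape.

    import (
        "context"

        mastodon "github.com/mattn/go-mastodon"
    )

    // statusFetcher fetches one page of statuses. Implementations wrap
    // go-mastodon calls such as client.GetTimelineHome.
    type statusFetcher func(ctx context.Context, pg *mastodon.Pagination) ([]*mastodon.Status, error)

    // paginateStatus owns the paging loop for every status-returning API.
    // go-mastodon refreshes pg from the Link response header after each
    // call; an empty MaxID means there are no older pages left.
    func paginateStatus(ctx context.Context, fetch statusFetcher) ([]*mastodon.Status, error) {
        var all []*mastodon.Status
        pg := mastodon.Pagination{Limit: 40}
        for {
            page, err := fetch(ctx, &pg)
            if err != nil {
                return nil, err
            }
            all = append(all, page...)
            if pg.MaxID == "" {
                return all, nil
            }
            // Carry only MaxID forward so each request asks for the next older page.
            pg = mastodon.Pagination{Limit: 40, MaxID: pg.MaxID}
        }
    }

The account and notification variants differ only in their element types; that residual triplication is what phase 2 then removed.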

I tried to pay careful attention to prompts and completions as I went along, but in the heat of the action I didn’t do a great job of that, partly because I was switching back and forth between the two tools. But I’m quite happy with the result. There was one key insight in particular which, fascinatingly, I am hard pressed to assign credit for. Was it me or one of the assistants? I think it was me, but in a way that doesn’t matter, and isn’t the point of this story.

The key insight

Here was the insight. When I was building the transitional paginateStatus function, the first attempt returned results to the calling code in each table’s List function, which was responsible for streaming the data to Steampipe. That led to a series of detours to work around the problem that the returned data could be quite large and chew up a lot of memory. It could probably have been solved with a goroutine that streamed results back to the caller instead of returning them as a batch. I prodded both LLMs to come up with that kind of solution, and had no luck after several tries with either, but then came the insight: the helper functions could stream results directly to Steampipe, and just return nil or err to the calling List function.
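
In code, the change looks something like this, again a hedged sketch rather than the plugin’s literal source; connect is a hypothetical helper that builds the API client, and statusFetcher is the function type from the earlier sketch. The helper calls Steampipe’s d.StreamListItem itself, so nothing accumulates, and each table’s List function reduces to delegation.

    import (
        "context"

        mastodon "github.com/mattn/go-mastodon"
        "github.com/turbot/steampipe-plugin-sdk/v5/plugin"
    )

    // paginateStatus streams each page straight to Steampipe instead of
    // returning an accumulated slice to the caller.
    func paginateStatus(ctx context.Context, d *plugin.QueryData, fetch statusFetcher) error {
        pg := mastodon.Pagination{Limit: 40}
        for {
            page, err := fetch(ctx, &pg)
            if err != nil {
                return err
            }
            for _, status := range page {
                d.StreamListItem(ctx, status) // rows flow out as they arrive
            }
            if pg.MaxID == "" {
                return nil
            }
            pg = mastodon.Pagination{Limit: 40, MaxID: pg.MaxID}
        }
    }

    // A table's List function now just delegates and reports success or failure.
    func listTimelineHome(ctx context.Context, d *plugin.QueryData, _ *plugin.HydrateData) (interface{}, error) {
        client, err := connect(ctx, d) // hypothetical connection helper
        if err != nil {
            return nil, err
        }
        return nil, paginateStatus(ctx, d, client.GetTimelineHome)
    }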

With that dramatic simplification I was able to complete the phase 1 refactoring, which yielded three pagination functions: paginateStatus, paginateAccount, and paginateNotification. Phase 2, which consolidated those into a single paginate function, was a bit more prosaic. I did need some help understanding how the necessary switch statements could switch on the timeline types passed into the paginate function. Both assistants had seen lots of examples of this pattern, and both helpfully augmented my imperfect knowledge of golang idioms.
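
Sketched with the same hypothetical names, the pattern looks something like this: a switch statement that maps each timeline type onto the corresponding go-mastodon call.

    import (
        "context"
        "fmt"

        mastodon "github.com/mattn/go-mastodon"
    )

    // fetchFor maps a timeline type onto the go-mastodon call that serves it.
    func fetchFor(client *mastodon.Client, timeline string) (statusFetcher, error) {
        switch timeline {
        case "home":
            return client.GetTimelineHome, nil
        case "local":
            // GetTimelinePublic's second argument restricts results to local statuses.
            return func(ctx context.Context, pg *mastodon.Pagination) ([]*mastodon.Status, error) {
                return client.GetTimelinePublic(ctx, true, pg)
            }, nil
        case "federated":
            return func(ctx context.Context, pg *mastodon.Pagination) ([]*mastodon.Status, error) {
                return client.GetTimelinePublic(ctx, false, pg)
            }, nil
        default:
            return nil, fmt.Errorf("unknown timeline type: %s", timeline)
        }
    }

In the consolidated version, the account- and notification-flavored APIs hang off the same kind of switch.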

Partnering with machine intelligence

I came away with a profound sense that the real value of these assistants isn’t any particular piece of code that they get “right” or “wrong” but rather the process of collaborating with them. When you’re working alone, you have an ongoing conversation with yourself, usually in your own head. The point of talking to a rubber duck is to voice that conversation so you can more effectively reason about it.

Externalizing your thinking in that way is intrinsically valuable. But when the rubber duck talks back, it’s a whole new game. As Garry Kasparov famously wrote:

The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and coaching their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

I’m not worried about robot overlords. Instead, I look forward to collaborating with robot partners.

This series:

  1. Autonomy, packet size, friction, fanout, and velocity
  2. Mastodon, Steampipe, and RSS
  3. Browsing the fediverse
  4. A Bloomberg terminal for Mastodon
  5. Create your own Mastodon UX
  6. Lists and people on Mastodon
  7. How many people in my Mastodon feed also tweeted today?
  8. Instance-qualified Mastodon URLs
  9. Mastodon relationship graphs
  10. Working with Mastodon lists
  11. Images considered harmful (sometimes)
  12. Mapping the wider fediverse
  13. Protocols, APIs, and conventions
  14. News in the fediverse
  15. Mapping people and tags in Mastodon
  16. Visualizing Mastodon server moderation
  17. Mastodon timelines for teams
  18. The Mastodon plugin is now available on the Steampipe Hub
  19. Migrating Mastodon lists
  20. When the rubber duck talks back

Copyright © 2023 IDG Communications, Inc.


