To: Subscribers of Not only Swift
Date: March 31, 2025
Issue: 78
Re: 🧠LLMs, SwiftUI, and SwiftHeroes 2025
Hey everyone!
Last week I mentioned that I vibe-coded the entire newsletter from scratch. In the past couple of days, I’ve cleaned up some of the issues I noticed, and also implemented the web version of the newsletter. So, if you’re reading this on the web, welcome! And if you’re reading this via email, you can now click through to the web version.
In this week’s issue, I’ve included a bunch of articles about how LLMs and LLM-powered tools work, hope you’ll find them interesting!
Of course, I’ve also included some Swift and SwiftUI content - enjoy!
As I am writing this, I am preparing for SwiftHeroes 2025 in Turin next week. If you’re going to be there, please say hi - I might have some swag for you!
The talk I am going to give is titled “Why every SwiftUI developer should care about the environment”, and it’s going to be a fun one. We’ll look at how the SwiftUI environment works, and how it is the backbone of what makes SwiftUI so powerful.
Cheers,
Peter
Swift
by Swift Team
Swiftly is a version manager for Swift toolchains. Now, if you’re an iOS developer, you’ve probably never thought about toolchains, as Xcode takes care of everything for you. But if you’re a backend developer, or you’d like to try out a pre-release version of Swift, or - if you’d like to sidestep Xcode entirely - Swiftly is for you.
Swiftly isn’t the first version manager for Swift - Pol mentioned swiftenv in his article Managing multiple versions of Swift locally. But having an official tool is a great step forward.
by Michael Long
Ever since Apple introduced async/await, they’ve stopped talking about Combine almost entirely. You can still use Combine, and it hasn’t been marked as deprecated, but it seems like new APIs are being built mostly with async/await in mind.
In this article, Michael explores the pros and cons of using AsyncStream vs Combine, both from an API ergonomics perspective and from a performance perspective.
I agree with Michael that Combine has a pretty steep learning curve, but once you’ve learned it, it’s very powerful. I’d even go as far as to say that Combine is a more natural fit for many UI-related use cases than async/await. For example, implementing a debounce for a search bar is much easier with Combine.
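To illustrate that point, here’s a minimal sketch of a debounced search pipeline in Combine. The view model and the injected search closure are hypothetical - the operators are the interesting part:

```swift
import Combine
import Foundation

// A hypothetical search view model: a search field writes to `query`,
// and the pipeline debounces, de-duplicates, and forwards the text.
final class SearchViewModel: ObservableObject {
    @Published var query = ""
    @Published private(set) var results: [String] = []

    init(search: @escaping (String) -> AnyPublisher<[String], Never>) {
        $query
            .debounce(for: .milliseconds(300), scheduler: RunLoop.main) // wait for a typing pause
            .removeDuplicates()                                         // skip unchanged queries
            .map(search)                                                // start a search per query
            .switchToLatest()                                           // cancel stale in-flight searches
            .receive(on: RunLoop.main)
            .assign(to: &$results)
    }
}
```

Replicating this with async/await means hand-rolling the timing and cancellation logic with `Task.sleep` and task cancellation - doable, but nowhere near as declarative.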
by Adam Wulf
MCP (Model Context Protocol) is a protocol that allows you to connect AI assistants to applications and data repositories. I previously mentioned that you can use it to generate Blender scenes, for example - but there are many other use cases.
Adam walks us through the process of setting up an MCP server in Swift, using the official MCP Swift SDK.
Anybody interested in building an MCP server for Keynote? Being able to chat with an LLM to draft a Keynote deck would be awesome!
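If you’re curious what an MCP server actually does on the wire: it speaks JSON-RPC 2.0, typically over stdio. Here’s a toy dispatcher that illustrates the shape of that exchange - this is deliberately not the official SDK’s API (Adam’s article covers that), just the underlying message format with a made-up “echo” tool:

```swift
import Foundation

// MCP is JSON-RPC 2.0 under the hood: the client sends requests like
// {"jsonrpc":"2.0","id":1,"method":"tools/list"}, and the server
// replies with a "result" under the matching "id".
func handle(_ requestJSON: String) -> String? {
    guard
        let data = requestJSON.data(using: .utf8),
        let object = try? JSONSerialization.jsonObject(with: data),
        let request = object as? [String: Any],
        let id = request["id"],
        let method = request["method"] as? String
    else { return nil }

    let result: [String: Any]
    switch method {
    case "tools/list":
        // Advertise a single hypothetical tool to the client.
        result = ["tools": [["name": "echo", "description": "Echoes its input"]]]
    case "tools/call":
        // Run the requested tool with the client-supplied arguments.
        let params = request["params"] as? [String: Any]
        let arguments = params?["arguments"] as? [String: Any]
        let text = arguments?["text"] as? String ?? ""
        result = ["content": [["type": "text", "text": text]]]
    default:
        result = [:]
    }

    let response: [String: Any] = ["jsonrpc": "2.0", "id": id, "result": result]
    let out = try! JSONSerialization.data(withJSONObject: response)
    return String(data: out, encoding: .utf8)
}
```

The real SDK wraps all of this in typed handlers, but seeing the raw request/response pairs makes it much easier to debug an MCP server when something goes wrong.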
SwiftUI
by Antonella Giugliano
If you have an app with a search feature, exposing it as a search intent is a great way to make it available to Siri. Antonella shows how to do this using Assistant Schemas.
This is part of a series of articles about Assistant Schemas, so once you’ve read this, make sure to check out the other articles in the series.
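For a flavour of what this looks like, here’s a rough sketch based on App Intents’ in-app search support. This is from memory and won’t compile outside an app target - treat the exact protocol and property names as assumptions, and follow Antonella’s article for the authoritative setup:

```swift
import AppIntents

// Hypothetical sketch: an intent that routes a Siri search query
// into the app's own search screen. Verify the exact schema and
// protocol names against Apple's current App Intents documentation.
@AssistantIntent(schema: .system.search)
struct SearchIntent: ShowInAppSearchResultsIntent {
    static let searchScopes: [StringSearchScope] = [.general]

    @Parameter var criteria: StringSearchCriteria

    @MainActor
    func perform() async throws -> some IntentResult {
        // Hypothetical navigation helper - hand the query to your search UI.
        AppNavigator.shared.showSearchResults(for: criteria.term)
        return .result()
    }
}
```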
AI & Machine Learning
by Brendan Bycroft
This is a fantastic interactive animation that explains the inner workings of large language models. The article focuses on the inference process (i.e., how the model generates text), and Brendan covers all the important steps and concepts you need to understand.
If you’ve ever wondered how LLMs work, this is a great place to start. Many of the model components on the right-hand side of the animation are interactive, so be sure to play around with them!
by André Bandarra
Temperature and top-k sampling are crucial parameters that control how language models generate text (you’ve probably heard that the higher the temperature, the more “creative” the model output will be).
My colleague André built this really cool visualisation that allows you to play around with the parameters to see how they affect the model output.
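To make the mechanics concrete, here’s a small, self-contained sketch of how temperature and top-k reshape a model’s output distribution. The logit values are made up for illustration:

```swift
import Foundation

// Turn raw logits into a probability distribution, applying
// temperature scaling first and then top-k filtering.
func sampleDistribution(logits: [Double], temperature: Double, topK: Int) -> [Double] {
    // 1. Temperature: divide the logits before the softmax. T < 1
    //    sharpens the distribution (more deterministic), T > 1 flattens
    //    it (more "creative").
    let scaled = logits.map { $0 / temperature }

    // 2. Top-k: keep only the k highest-scoring tokens, drop the rest.
    let threshold = scaled.sorted(by: >)[min(topK, scaled.count) - 1]
    let filtered = scaled.map { $0 >= threshold ? $0 : -Double.infinity }

    // 3. Softmax over the survivors (subtract the max for numerical stability).
    let maxLogit = filtered.max()!
    let exps = filtered.map { exp($0 - maxLogit) }
    let total = exps.reduce(0, +)
    return exps.map { $0 / total }
}
```

Playing with the numbers here mirrors what André’s visualisation shows: a low temperature piles almost all the probability mass onto the top token, and top-k removes the long tail of unlikely tokens before sampling even happens.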
by Roman Imankulov
As I mentioned in the intro, I used Cursor to vibe-code the entire newsletter infrastructure for Not only Swift. As you can see, it did a great job - although I had to provide some guidance to make it do exactly what I wanted.
So - how do Cursor’s AI features work? Roman Imankulov takes us on a fascinating journey into the inner workings of Cursor’s AI system, revealing how it processes requests and interacts with your codebase.
What makes working with Cursor (and other AI code IDEs) so amazing is its agent feature - it basically allows the AI to take over your IDE, so it can actually create new files, folders, and even run build scripts and tests. This is a bit scary at first, but absolutely wild when you get the hang of it. Under the hood, agents make heavy use of function calling, and Roman provides a detailed list of the functions Cursor uses to complete your requests.
BTW - Genkit (Google’s AI integration framework) supports agents and tool calling as well, including interrupts, which allow you to obtain user input mid-flow - just like you can see in Cursor.
by Anthropic
Understanding how large language models work internally has been one of the biggest challenges in AI research, and much of what’s going on inside these models is still a bit of a black box.
Anthropic’s researchers have developed a method to monitor the activation patterns of individual neurons and groups of neurons within their models. This allows them to trace some of the thought processes of the model, and understand how it makes decisions.
The article also explains how and why models sometimes hallucinate, and offers strategies for dealing with this behaviour.
Tools & Tips
by Ahmed Khaleel
When working with a new codebase, it can be hard to understand the relationships between its different parts. Ahmed Khaleel has created GitDiagram, a powerful tool that turns any GitHub repository into an interactive diagram.
Just replace “hub” with “diagram” in any GitHub URL, and you’ll get a visual representation of the repository’s structure - it doesn’t get any easier than that.