
ZDNET's key takeaways
- Google's Jules coding agent is out of beta, with clear pricing tiers.
- Even the free tier has new, more generous usage limits.
- Updates include improved AI and better setup management.
Google's coding AI Jules is out of beta. Announced in a blog post on Wednesday, the coding helper is powered by Google's Gemini 2.5 Pro LLM. You might recall that Google also recently announced its Gemini CLI GitHub Actions coding tool.
What do all these developments mean? Let's unpack the announcements.
Jules vs. Gemini CLI GitHub Actions
Jules and Gemini CLI GitHub Actions are different beasts. Jules is meant for big projects, significant changes, and project planning.
Also: Google embeds AI agents deep into its data stack - here's what they can do for you
Gemini CLI GitHub Actions augments GitHub's workflow tool, GitHub Actions. The Gemini CLI version interacts with GitHub, helps respond to issues (problem reports), generates and responds to pull requests (submitted changes), and collaborates with other coders working on the same project.
Here's a handy table that helps show the differences between the assistants:

| | Jules | Gemini CLI GitHub Actions |
| --- | --- | --- |
| Built for | Big projects, significant changes, and project planning | Working inside GitHub's workflow tool, GitHub Actions |
| What it does | Produces a plan of action, then makes changes on its own branch of your codebase for you to review | Responds to issues, generates and responds to pull requests, and collaborates with other coders on the same project |
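If you haven't wired an AI into GitHub Actions before, it may help to see how small the moving parts are. Below is a minimal sketch of the kind of issue-response automation a tool like Gemini CLI GitHub Actions handles for you. To be clear, this is not Google's tool or its API: it's a plain Python script calling GitHub's public REST API, with a hypothetical owner, repo, and issue number, and in a real workflow an LLM would draft the reply text.

```python
# Illustrative only: a plain script that posts a reply to a GitHub issue via
# GitHub's public REST API. This is NOT Gemini CLI GitHub Actions or its API;
# it just shows the kind of issue-response plumbing such a tool automates.
# The owner, repo, and issue number are hypothetical placeholders.
import os

import requests

OWNER = "example-owner"              # hypothetical
REPO = "example-repo"                # hypothetical
TOKEN = os.environ["GITHUB_TOKEN"]   # a personal access token or Actions token


def comment_on_issue(issue_number: int, body: str) -> None:
    """Post a comment on a GitHub issue."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{issue_number}/comments"
    response = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": body},
        timeout=30,
    )
    response.raise_for_status()


if __name__ == "__main__":
    # In a real workflow, an LLM would draft this reply from the issue text;
    # here it is just a canned acknowledgment.
    comment_on_issue(1, "Thanks for the report. Triaging this now.")
```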
That was then
I first tried Jules on the day it became available as a beta offering. Although the assistant had some early server and availability problems, I was still able to add a new feature to my open-source project. I then deployed that new feature to all my users.
That was on May 27. Since I didn't suddenly have 20,000 angry users screaming at me that I broke their sites (yes, I've done that before), it's pretty clear that the Jules additions worked well.
Also: Google's Jules AI coding agent built a new feature I could actually ship - while I made coffee
What I wanted to like about Jules is that it produces a plan of action in response to your prompt. You can (in theory) interact with Jules to refine that plan of action before it goes off and makes changes to your code.
And when I first tried the tool, Jules did produce a plan of action. But like my little Yorkie pup, it didn't possess the patience to wait for me to say it was okay to go. That kind of impatience leads to small dogs climbing into your dinner dish and big AIs blasting through an entire codebase in minutes.
Fortunately, Jules branches your codebase. This feature ensures nothing is incorporated into your main release until after you approve it, as you would any other proposed change on GitHub.
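For the curious, that approval gate is ordinary GitHub mechanics. Here's a rough sketch, again using GitHub's public REST API rather than anything Jules-specific, of a merge that only happens after a reviewer approves the pull request; the owner and repo names are hypothetical placeholders.

```python
# A rough sketch of the approval gate the branching model gives you: the
# proposed change sits on its own branch as a pull request, and it only
# reaches your main branch once a human has approved it. This uses GitHub's
# public REST API directly; Jules itself is not involved.
import os

import requests

OWNER, REPO = "example-owner", "example-repo"   # hypothetical
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def merge_if_approved(pr_number: int) -> bool:
    """Merge a pull request only if at least one review has approved it."""
    base = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}"

    reviews = requests.get(f"{base}/reviews", headers=HEADERS, timeout=30)
    reviews.raise_for_status()
    if not any(review.get("state") == "APPROVED" for review in reviews.json()):
        return False  # leave the proposed change sitting on its branch

    merged = requests.put(
        f"{base}/merge",
        headers=HEADERS,
        json={"merge_method": "squash"},
        timeout=30,
    )
    merged.raise_for_status()
    return True
```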
This is now
As of August 6, Jules is officially out of beta. The official release has what Google called a "new, more user-friendly interface."
Jules also allows developers to reuse previous setups. This could be quite valuable because having to train the AI on your code for every coding request can substantially slow the process down and potentially introduce errors. This way, once you have a working prompt foundation, you can reuse it.
Also: Gemini Pro 2.5 is a stunningly capable coding assistant - and a big threat to ChatGPT
Jules also now has multimodal support. Google said: "Jules can test your web application and show you a visual representation of the results. This can help Jules iterate until it gets things right and gives you confidence in the code Jules creates."
When I tested Gemini 2.5 Pro against my coding test suite, it did astonishingly well. Earlier versions of Gemini and its predecessor, Bard, did not perform nearly as well.
Gemini 2.5 prioritizes speed and efficiency. In other words, it costs Google less to run each prompt. Google Gemini 2.5 Pro is intended for deep reasoning, accuracy, and handling complex tasks. Google prices Gemini 2.5 Pro at about 8 1/2 times the cost of Gemini 2.5, so we can pretty much assume Gemini 2.5 Pro uses a lot more resources for each query.
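For a rough sense of what that gap could mean per task, here's some back-of-the-envelope math. The cost unit below is made up; only the roughly 8.5x ratio comes from the price comparison above.

```python
# Back-of-the-envelope math behind the "8 1/2 times the cost" comparison.
# The base cost is an arbitrary unit, not Google's actual per-token pricing;
# only the ~8.5x ratio is taken from the comparison above.
GEMINI_25_COST = 1.0                       # hypothetical cost units per query
GEMINI_25_PRO_COST = 8.5 * GEMINI_25_COST


def task_cost(queries: int, use_pro: bool) -> float:
    """Estimate the cost of a coding task that issues several model queries."""
    per_query = GEMINI_25_PRO_COST if use_pro else GEMINI_25_COST
    return queries * per_query


# A task that takes five model queries from plan to finished change:
print(task_cost(5, use_pro=False))   # 5.0 units on Gemini 2.5
print(task_cost(5, use_pro=True))    # 42.5 units on Gemini 2.5 Pro
```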
As you read about this announcement, you might find some confusion. Based on the blog post, it's not fully clear whether Jules is powered by Gemini 2.5 or a combination of Gemini 2.5 and Gemini 2.5 Pro. According to Google's blog post announcing the public availability of Jules, the tool is "powered by Gemini 2.5." But later in the blog post, Google said: "Jules now uses the advanced thinking capabilities of Gemini 2.5 Pro to develop coding plans."
I asked Google's team about the discrepancy. It confirmed to ZDNET that Jules uses the much more powerful reasoning model to plan out actions and then executes those actions with the same Gemini 2.5 Pro LLM. Given how impressive 2.5 Pro was in my testing, that's a good thing.
Also: Claude Code makes it easy to trigger a code check now with this simple command
Jules is offered in three pricing tiers, each with its own usage limits. When I tried Jules back in May, I was given only five tasks per day, but I was able to make a feature addition to my project in just two tasks. You could get quite a bit done even then.
As of now, the free version of Jules allows 15 tasks per day, which is good. If you take the time to craft clear prompts, you're unlikely to need much more than that each day.
However, if you want more, you can move to Google's $20/month Pro plan or $250/month Ultra plan.
Also: The best AI for coding in 2025 (including a new winner - and what not to use)
Stay tuned. I plan to use the release version of Jules on another coding modification. I'll report back on my experiences.
What about you?
Did you try Jules during the beta? How does it compare to other AI coding assistants you've used? Do you see value in its planning-based approach, or do you prefer a more hands-on coding helper? And what do you make of the apparent confusion over which Gemini model the assistant uses? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.