The web platform team in Chrome is working on built-in AI features, where the browser provides AI models, including large language models (LLMs), to enable on-device AI for browser features and web platform APIs. This is a game changer and a huge opportunity for WordPress to democratize AI-assisted publishing. Let me tell you why.
If WordPress does not want to fall behind its competitors, it must seamlessly provide the AI features users now expect from a publishing platform. There are already various AI WordPress plugins, but they all come at a cost—both literally and metaphorically. These plugins all rely on third-party server-side solutions, which affects both your privacy and your wallet.
Web AI has several key benefits over a server-side approach. It brings models to the browser, protecting sensitive data and improving latency.
Web AI is the overarching term for the ability to run AI solutions in the browser using JavaScript, WebAssembly, and WebGPU. This space was pioneered by libraries such as TensorFlow.js and Transformers.js. Using those tools, websites can download models and run tasks of their choice directly in the browser.
Chrome’s built-in AI
If every website has to download and update these models all the time, that neither scales nor is sustainable. That’s where Chrome’s built-in AI, which is just one form of client-side AI or Web AI, steps in.
With the built-in AI, your site or web app will be able to run various AI tasks against foundation and expert models without having to worry about deploying and managing said models. Chrome achieves this by making Gemini Nano available through dedicated web platform APIs, running locally on most modern computers.
Note: This functionality is currently only available in Chrome Canary. Join the early preview program to learn how to access those early-stage built-in AI features and provide feedback.
At the moment, the Chrome team expects the built-in AI to be beneficial for both content creation and content consumption. During creation, this could include use cases such as writing assistance, proofreading, or rephrasing. On the consumption side, typical examples are summarization, translation, categorization, or answering questions about some content.
Early preview program participants receive more detailed information about the various built-in AI APIs, such as the prompt API, the summarizer API, and the writer and rewriter APIs.
To give you an example, using the prompt API is pretty straightforward:
const session = await ai.languageModel.create( {
	systemPrompt: "You are a friendly, helpful assistant specialized in clothing choices.",
} );

const result = await session.prompt( `
	What should I wear today? It's sunny and I'm unsure between a t-shirt and a polo.
` );
console.log( result );

const result2 = await session.prompt( `
	That sounds great, but oh no, it's actually going to rain! New advice??
` );
The other APIs are similarly straightforward to use. By the way, if you are an avid TypeScript user, there are already type definitions which I helped write.
Eager to build something with this new API? The Google Chrome Built-in AI Challenge invites developers to explore new ground by creating solutions that leverage Chrome’s built-in AI APIs and models, including Gemini Nano. Cash prizes totaling $65,000 will be awarded to winners.
Web AI Advantages
The browser takes care of model distribution and updates for you, significantly reducing the complexity and overhead.
Everything is processed locally on-device, without sending your sensitive data elsewhere, keeping it safe and private.
No server round trips means you can get near-instant results.
You can access AI features even if you’re offline or have bad connectivity.
Save money by not having to use expensive third-party services or sending large models over a network.
This is not the first time that doing things client-side proves better than doing them on the server. You could of course consider a hybrid approach where you handle most use cases on-device and leverage a server-side implementation for the more complex ones.
Using built-in AI in WordPress
If WordPress core wants to offer AI capabilities to each and every one of its users, it can’t jeopardize users’ privacy by relying on expensive third-party services. The project also does not have the resources or infrastructure to maintain its own API, and running AI inference in PHP on a shared hosting provider is not viable either.
That’s why Web AI and particularly Chrome’s built-in AI are a perfect match for WordPress. With it, users benefit from a powerful Gemini Nano model that helps them accomplish everyday tasks.
The modern and extensible architecture of the WordPress post editor (Gutenberg) makes it a breeze to leverage Chrome’s built-in AI. To demonstrate this, I actually built several AI experiments. They enhance the user experience and accessibility for both editors and readers.
Summarizing content
Uses Chrome’s built-in summarization API to provide readers with a short summary of the post content. The UI is powered by WordPress’ new Interactivity API.
Writing meta descriptions based on the content
Using the dedicated summarizer API, the content can be easily summarized in only a few sentences.
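As a hedged illustration, here is roughly what that could look like (the option names come from the early preview documentation and may still change; postContent is assumed to hold the post’s text):

// Create a summarizer tuned for a short TL;DR-style output.
const summarizer = await ai.summarizer.create( {
	type: 'tl;dr',
	length: 'short',
} );
const metaDescription = await summarizer.summarize( postContent );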
“Help me write”
Options for rewriting individual paragraphs à la Google Docs, like rephrasing, shortening, or elaborating.
Generate image captions / alternative text
Uses Transformers.js and Florence-2 to generate image captions and alternative text for images directly in the editor. Also integrated into Media Experiments, which supports video captioning too.
Generate a memorable quote
A slight variation on the summarization use case, this extracts a memorable quote from the article and gives it some visual emphasis.
Assigning tags & categories to blog posts
Suggests matching tags/categories based on the content: it grabs a list of existing terms from the site and passes it to the prompt together with the post content.
Sentiment analysis for content / comments
Uses a simple prompt to determine whether the text is positive or negative. This could be used to suggest rephrasing the text à la Grammarly, or to identify negative comments.
Summary
From summarizing content to more complex examples like automatically categorizing posts, the currently explored use cases offer a glimpse of what’s possible with Web AI in WordPress. Their main advantage lies in their combined strength and deep integration into WordPress, making for a seamless UX. Chrome’s built-in AI also makes this functionality ubiquitous, as every WordPress user could leverage it without any browser extension, plugin, or API. This is just the beginning. In the future, more complex AI features could push the boundaries of interacting with WordPress even further.
At WordCamp US 2024 I gave a presentation about client-side media processing, which is all about bringing WordPress’ media uploading and editing capabilities from the server to the browser. Watch the recording or check out the slides. This blog post is a written adaption of this talk.
A lot has changed since then. Not only have I built new features, but I also completely refactored the Media Experiments plugin that all of this is part of. For WordCamp US, I chose to put more focus on the technical aspects of media handling in WordPress and the benefits of a new browser-based approach.
If you haven’t seen it before, here’s a quick glimpse of what browser-based media processing allows us to do:
Contributors wanted
You can find everything covered in this article in the Media Experiments GitHub repository. It contains a working WordPress plugin that you can easily install on your site or even just test with one click using WordPress Playground.
The goal is to eventually bring parts of this into WordPress core itself, which is something I am currently working on.
To make this project a reality, I need your help! Please try the plugin and provide feedback for anything that catches your eye. Or even better, check out the source code and help tackle some of the open issues.
Let WordPress help you
The WordPress project has a clear philosophy, with pillars such as “Design for the majority” and “Decisions, not options”. There is one section in that philosophy which particularly stands out to me:
The average WordPress user simply wants to be able to write without problems or interruption. These are the users that we design the software for.
In my experience, when it comes to uploading images and other types of media, WordPress sometimes falls short of that. There are still many problems and interruptions. This is even more problematic nowadays as the web is more media-rich than ever before.
Minimizing frustration
Who hasn’t experienced issues when uploading media to WordPress?
Perhaps an image was too large and uploading took forever, maybe even resulting in a timeout. And if it worked, the image was so large that it degraded your site’s performance.
Maybe you were trying to upload a photo from your phone, but WordPress doesn’t support it and tells you to please use another file. Or even worse, you upload a video, it succeeds, but then you realize none of the browsers actually support the video format.
In short: uploading media is a frustrating experience.
To work around these issues, you start googling how to manually convert and compress images before uploading them. Maybe even reducing the dimensions to save some bandwidth.
If you use videos, maybe you upload them to YouTube because you don’t want to bother with video formats too.
Maybe you switch your hosting provider because your server still takes too long to generate all those thumbnails or because it doesn’t support the newest image formats.
This is tedious and time-consuming. WordPress should be taking work off your shoulders, not making your life harder. So I set out to make this better. I wanted to turn this around and let WordPress help you.
At State of the Word 2023, WordPress co-founder Matt Mullenweg said the following about WordPress’ mission to Democratize Publishing:
We take things that used to require advanced technical knowledge and make it accessible to everyone.
Matt Mullenweg
And I think media uploads are a perfect opportunity for us to apply this. The solution lies in the browser.
WebAssembly
Your server might not be capable of generating all those thumbnails or converting a specific image format to something usable. But thanks to your own device’s computing power and technologies such as WebAssembly, we can fix this for you.
With WebAssembly, you can compile code written in a language like Rust or C++ to run in the browser with near-native performance. In the browser, you can load WebAssembly modules via JavaScript and seamlessly send data back and forth.
At the core of what I am showing you here is one such WebAssembly solution called wasm-vips. It is a port of the powerful libvips image processing library. That means any image operation that you can do with vips, you can now do in the browser.
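To give you an idea, here is a hedged sketch of generating a thumbnail with wasm-vips (method names follow its documentation; buffer is assumed to hold a dropped file’s contents):

import Vips from 'wasm-vips';

// Instantiate the WebAssembly module once.
const vips = await Vips();

// Load the original image from an ArrayBuffer...
const image = vips.Image.newFromBuffer( buffer );

// ...generate a 150px-wide thumbnail...
const thumbnail = image.thumbnailImage( 150 );

// ...and encode it, for example as WebP.
const webp = thumbnail.writeToBuffer( '.webp' );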
Vips vs. ImageMagick
Vips is similar to ImageMagick, which WordPress typically uses, but has some serious advantages. For example, when WordPress loads vips in the browser, it can always use the latest version, whereas on the server we have to use whatever version is available.
Sometimes those are really old versions with certain bugs or without support for more modern image formats like AVIF. For hosts, upgrading can be challenging, as they don’t want to break any sites. And even where ImageMagick already supports a format like AVIF, it can be very slow. Vips, on the other hand, is more performant, has more features, and even for older formats like JPEG it uses newer encoders with better results.
Client-side vs. server-side media processing
Traditionally, when you drop an image into the editor or the media library, it is sent to WordPress straight away. There, ImageMagick creates thumbnails for every registered image size one by one. That means a lot of waiting until WordPress can proceed. Here is where timeouts usually happen.
Eventually, once all the thumbnails are generated, WordPress creates a new attachment and sends it back to the editor. There, the editor can swap out the file you originally dropped with the final one returned by the server.
Compare this to the client-side approach using the vips image library:
Once you drop an image into the editor, a web worker creates thumbnails of it. A web worker runs in a separate thread from the editor, so none of the image processing affects your workflow. Plus, the cropping happens in parallel, which makes the whole process super fast. Every thumbnail is then uploaded separately to the server, which has to do very little work: just store the file and return the attachment data.
You immediately see all the updates in the editor after every step, so you have a much faster feedback loop. With this approach, the chances for errors, timeouts or memory issues are basically zero.
New use cases
The Media Experiments plugin contains tons of media-related features and enhancements. In this section I want to highlight some of them to better demonstrate what this new technology unlocks in WordPress.
Image compression
As shown in the demo at the beginning of the article, a key feature is the ability to compress or convert images directly in the browser. This works for existing images as well as new ones. All the thumbnails are generated in the browser as well.
Bonus: Did you see it? The plugin automatically adds AI-generated image captions and alt text for the image. This simply wouldn’t be possible on a typical WordPress server, but thanks to WebAssembly we can easily use AI models for such a task in the browser.
You can also compress all existing images in a blog post at once. The images can come from all sorts of sources too, for example from the image block, gallery block, or the post’s featured image.
In theory you could even do this for the whole media library. The tricky part, of course, is that your browser needs to stay open the entire time. So that idea isn’t fully fleshed out yet.
Smart thumbnails
By default, when WordPress creates those 150×150 thumbnails it does a hard crop in the center of the image. For some photos, that leads to poor results, for example cutting off the most relevant part of the picture, like a person’s head.
Vips supports saliency-aware image cropping out of the box, which looks for things like color saturation to determine a better crop.
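With wasm-vips, this is a matter of passing an option when generating the thumbnail. A hedged sketch, with the option name as documented by libvips (the exact enum spelling may differ in wasm-vips):

// Saliency-aware 150×150 crop: 'attention' looks at things like color
// saturation instead of blindly cropping the center.
const thumbnail = image.thumbnailImage( 150, {
	height: 150,
	crop: 'attention',
} );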
At first you might think it is just a minor detail, but it’s actually really impactful. It just works, and it works for everybody! You will never have to worry about accidentally cropping off someone’s face again.
HEIC Images
If you use an iPhone you might have seen HEIC/HEIF images before, as it uses that format by default. It is a format with strong compression, but only Safari fully supports it.
Thanks to WebAssembly, WordPress can automatically convert such images to something usable. In this demo you will first notice a broken preview, as the (Chrome) browser doesn’t support the file format. But then it swiftly converts it to a JPEG, fixing the preview, and then uploads it to the server.
Bonus: this also works for JPEG XL, which is another format that only Safari supports.
Upload from your phone
In the above video I used an HEIC image which I previously took on my iPhone and then transferred to my computer. And from my computer I then uploaded it to WordPress. But what if you cut out the middleman?
In the editor, you can generate a QR code that you scan with your camera, or a URL that you can share with a colleague. Here, I am opening this URL in another browser, but let’s pretend it’s my phone. On your phone you then choose the image you want to upload. After that, it magically appears in the editor on your computer.
Media compression and conversion also work great for videos. The screencasts I recorded for this post are in the MOV format, which doesn’t work in all browsers.
Thanks to ffmpeg.wasm, a WebAssembly port of the powerful FFmpeg framework, WordPress can convert them to a more universal format like MP4 or WebM. The same works for audio files as well.
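A rough sketch of what such a conversion looks like with ffmpeg.wasm (API names from its 0.12.x releases; the file names are illustrative):

import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile } from '@ffmpeg/util';

const ffmpeg = new FFmpeg();
await ffmpeg.load(); // Downloads and instantiates the WebAssembly binary.

// Write the recording into ffmpeg's virtual file system, convert it, read it back.
await ffmpeg.writeFile( 'input.mov', await fetchFile( file ) );
await ffmpeg.exec( [ '-i', 'input.mov', 'output.mp4' ] );
const mp4 = await ffmpeg.readFile( 'output.mp4' );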
This solution also generates poster images out of the box, which is important for user experience and performance.
Bonus: just like for image captions, AI can automatically generate subtitles for any video.
Animated GIFs
Sometimes you’re not dealing with videos though, but with GIFs. Who doesn’t like GIFs?
Well, the thing is, GIFs are actually really bad for user experience and performance. Not only are they usually very bad quality, they can also be huge in file size. Most likely you should not be using animated GIFs.
The good news is that animated GIFs are nothing but videos with a few key characteristics:
They play automatically.
They loop continuously.
They’re silent.
By converting large GIFs to videos, you can save big on users’ bandwidth. And that’s exactly what WordPress can and should do for you.
In the following demo, I am dragging and dropping a GIF file from my computer to the editor. Since it is an image file, WordPress first creates an image block and starts the upload process.
Then, it detects that it is an animated GIF, and uses FFmpeg to convert it to an MP4 video. This happens in the blink of an eye. As it’s now a video, WordPress replaces the image block with a special GIF variation of the video block that’s looping and autoplaying. And of course the video is multiple times smaller than the original image file. As a user, you can’t tell the difference. It just works.
Media recording
Compressing videos and converting GIFs is cool, but one of my personal favorites is the ability to record videos or take still pictures directly in the editor, and then upload them straight to WordPress.
So if you’re writing some sort of tutorial and want to accompany it with a video, or if you are building the next TikTok competitor, you could do that with WordPress.
Bonus: You probably don’t see it well in the demo, but thanks to AI you can even blur your background for a little more privacy. Super cool!
Challenges
Client-side media processing adds a pretty powerful new dimension to WordPress, but it isn’t always as easy as it looks!
Cross-origin isolation
On the implementation side, cross-origin isolation is a tricky topic.
WebAssembly libraries like vips or ffmpeg use multiple threads to speed up processing, which means they require shared memory. Shared memory in the browser means you need SharedArrayBuffer.
For security reasons, enabling SharedArrayBuffer requires a special configuration called cross-origin isolation. That puts a web page into a special state that enforces some restrictions when loading resources from other origins.
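Concretely, a page opts into cross-origin isolation by being served with these two HTTP response headers (Chrome additionally accepts the more lenient credentialless value for the second one):

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp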
In the WordPress editor, I tried to implement this as smoothly as possible. Normally, you will not even realize that cross-origin isolation is in effect. However, some things in the editor might not work as expected anymore.
The most common issue I encountered is with embed previews in the editor.
In Chrome, all your embed previews in the editor continue to work, while in Firefox and Safari they don’t, because those browsers do not support iframe credentialless when isolation is in effect.
I hope that Firefox and Safari remedy this in the future. Chrome is also working on an alternative proposal called Document-Isolation-Policy which would help resolve this as well. But that might still be years in the future.
Open source licenses (GPL compatibility)
Another unfortunate thing is that open source licenses aren’t always compatible with each other. This is the case with the HEIC conversion for those iPhone photos.
Being able to convert those iPhone photos directly in the browser before sending them to the server just makes so much sense. Unfortunately, it’s a very proprietary file format. The only open source implementation (libheif) is licensed under the LGPL 3.0, which is only compatible with GPL v3. However, WordPress’ license is GPLv2 or later.
That means we can’t actually use it 🙁
The good news is that we found another way, and it’s even already part of the next WordPress release!
However, this happens on the server again instead of the browser.
This is possible because on the server the conversion happens in ImageMagick (when compiled with libheif), and not in core itself, so there’s no license concern for WordPress.
The downside of this approach is that it will only work for very few WordPress sites, as it again depends on your PHP and ImageMagick versions. So while this is a nice step in the right direction, only with the client-side approach can we truly support this for everyone.
The next steps
All of these challenges simply mean there is still some work to do before it can be put into the hands of millions of WordPress users.
While this project started as a separate plugin, I am currently in the process of contributing these features step by step to Gutenberg, where we can further test them behind an experimental flag.
We start with the fundamental rewrite of the upload logic, adding support for image compression and thumbnail conversion. After that, we can look into format conversion, making it easier to use more modern image formats and choosing the format that is most suitable for any given image. From there, we can expand this to videos and audio files.
Finally and ideally, we expand beyond the post editor and make this available to the rest of WordPress, like the media library or anywhere else where one would upload files.
I am also really excited about the possibility of making this available to everyone building a block editor outside of WordPress, like Tumblr for example.
Democratizing publishing
With client-side media processing we make a giant leap forward when it comes to democratizing publishing.
As mentioned at the beginning, the average WordPress user simply wants to be able to write without problems or interruption. By eliminating all these problems related to media, users will be able to create media-rich content much more easily and quickly.
Thanks to client-side media processing, we can greatly improve the user experience around uploads. You benefit from faster uploads, fewer headaches, smaller images, and less overloaded servers. Also, you no longer need to worry about server support or switch hosting providers. Smaller images and more modern image formats help make your site load faster too, which is a nice little bonus.
While that GitHub Action works extremely well, the zero-setup approach has two drawbacks:
1. It is not possible to configure the test environment, for example by adding demo content or changing plugin configuration
2. It is not possible to test more complex scenarios, like any user interactions (e.g. for INP)
For (2), the best alternative right now is to go with the manual approach. For (1), I have now found a solution in WordPress Playground. Playground is a platform that lets you run WordPress instantly on any device, and it can be seen as a replacement for the Docker-based @wordpress/env tool.
Using Blueprints for automated testing
One particular strength of WordPress Playground is the idea of Blueprints. Blueprints are JSON files for setting up your WordPress Playground instance. In other words, they are a declarative way for configuring WordPress—like a recipe. A blueprint for installing a specific theme and plugin could look like this (a sketch following the Blueprint schema; the slugs are examples):
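{
	"$schema": "https://playground.wordpress.net/blueprint-schema.json",
	"steps": [
		{
			"step": "installTheme",
			"themeZipFile": {
				"resource": "wordpress.org/themes",
				"slug": "twentytwentyfour"
			}
		},
		{
			"step": "installPlugin",
			"pluginZipFile": {
				"resource": "wordpress.org/plugins",
				"slug": "performance-lab"
			}
		}
	]
}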
The newly released version 2 of the performance testing GitHub Action now uses Blueprints under the hood to set up the testing environment and do things like importing demo content and installing mandatory plugins and themes. In addition to that, you can now use Blueprints for your own dedicated setup!
This way you can install additional plugins, change the site language, define some options, or even run arbitrary WP-CLI commands. There are tons of possible steps and also a Blueprints Gallery with real-world code examples.
To get started, add a new swissspidy/wp-performance-action@v2 step to your workflow (e.g. .github/workflows/build-test.yml). A minimal setup could look like this:
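# A sketch using the inputs shown later in this post; adjust paths to your project.
- name: Run performance tests
  uses: swissspidy/wp-performance-action@v2
  with:
    urls: |
      /
      /sample-page/
    plugins: |
      ./my-custom-plugin
    blueprint: ./my-custom-blueprint.json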
The GitHub Action will now use your custom blueprint to install and activate your own custom plugin as well as the performance-lab and akismet plugins from the plugin directory.
Alongside this new feature I also included several bug fixes for things I originally planned to add but never really finished. For instance, it is now actually possible to run the performance tests twice and then compare the difference between the results.
This way, when you submit a pull request you can run tests first for the main branch and then for your PR branch to quickly see at a glance how the PR affects performance. Here is an example:
jobs:
  comparison:
    runs-on: ubuntu-latest
    steps:
      # Check out the target branch and build the plugin
      # ...
      - name: Run performance tests (before)
        id: before
        uses: ./
        with:
          urls: |
            /
            /sample-page/
          plugins: |
            ./tests/dummy-plugin
          blueprint: ./my-custom-blueprint.json
          print-results: false
          upload-artifacts: false
      # Check out the current branch and build the plugin
      # ...
      - name: Run performance tests (after)
        uses: ./
        with:
          urls: |
            /
            /sample-page/
          plugins: |
            ./tests/dummy-plugin
          blueprint: ./my-custom-blueprint.json
          previous-results: ${{ steps.before.outputs.results }}
          print-results: true
          upload-artifacts: false
The result will look a bit like this:
Playground is the future
Being able to use Playground for automated testing is really exciting. It simplifies a lot of the setup and speeds up the bootstrapping, even though the sites themselves aren’t as fast (yet) as when using a Docker-based setup. However, there is a lot of momentum behind WordPress Playground and it is getting better every day. Applications like this one further help push its boundaries.
After WordCamp US 2024, some core committers have started sharing their WordPress contribution workflows. Since mine appears to be a bit different from the other ones posted so far, I figured I’d follow suit. So here’s how I commit to WordPress!
The following sections cover everything that comes to mind right now. If there’s more in the future or something changes, I’ll try to update this post accordingly.
Separate repositories
I use separate folders for the checkouts of the Git repository and the SVN repository. The Git checkout has both official mirrors set up as remotes.
For daily work, I use Git. Be it for the local development environment, running tests, applying patches, or submitting pull requests.
Only for committing do I take an actual patch file or PR, apply it in the SVN repository, and then commit the change.
Local development
On my work laptop I cannot use Docker, so I can’t use the built-in development environment. Instead I use Local for running trunk locally. I symlinked the default site it creates to my Git checkout, which worked surprisingly well.
Aliases
Over the years my workflow hasn’t changed that much, so for the most frequently used commands I created aliases and put them in my dotfiles.
For Git I have a ton more aliases, but nowadays I only use a handful, like git squash, git amend, git undo, or git patch.
I might add the gh-patch alias Joe uses, as it is faster than using npm run grunt patch, especially since the latter always tries to first install npm dependencies, which is a bit annoying.
Committing
If I am committing on the command line, I use svn ci to open my default editor, which is nano. There I write the commit message, save the changes, and close the editor to push the commit.
I check the commit message guidelines frequently, which I think every core committer should do. Sometimes I think a custom linter in a pre-commit hook or so would be cool.
GUI
Most of the time I actually don’t use svn on the command line, but instead use Cornerstone, which is a GUI Subversion client for Mac. I think it was originally recommended to me by my friend and fellow core committer Dominik.
A graphical user interface provides extra safety when dealing with more complex scenarios like adding or removing files, backporting commits, or changing file properties. On the command line it would be easy to miss some things, but with a UI it’s much easier to see everything at a glance.
When contributing to WordPress core or related projects, a lot of the time is spent between WordPress Trac and GitHub. You typically open a new Trac ticket to propose an enhancement, then submit a pull request on GitHub with the necessary code changes. You may then even use Slack to discuss the change with fellow contributors. That’s now three platforms to participate in WordPress development—and three usernames to keep juggling between.
Some people (myself included) use the same username on all platforms, while others have completely separate ones on each of them. That makes it very inconvenient to track down people, for example when you want to chat with a contributor on Slack, or want to give a pull request author props in a Subversion commit. Luckily, the latter has become a bit easier thanks to Props Bot. Still, I regularly found myself using site:profiles.WordPress.org <username> Google searches to find someone’s WordPress.org account based on their GitHub information. Surely there is a better way to do this.
WordPressers on GitHub browser extension
Luckily, I found out about an API to get someone’s WordPress.org username based on their GitHub username. That’s exactly what I needed! Using this API, I built a very straightforward WordPressers on GitHub browser extension. This extension displays a WordPress logo next to a GitHub username if the person has a username on WordPress.org. Simply click on the logo to visit their profile, or hover over it to see a tooltip with their username.
Thanks to a great suggestion by Jonathan Desrosiers, the WordPress.org username is also automatically displayed in the bio when viewing a GitHub profile.
In the few weeks I’ve been using the extension, it has already come in handy a lot of times. Especially since I was a core tech lead for the WordPress 6.5 release and reviewing lots of pull requests during that time. May it be useful for you too 🙂
In the style of the WordPress mission statement to democratize publishing, I like to call this effort Democratizing Performance. Or in other words: performance for everyone. In my eyes, everyone should be able to have a fast website, regardless of their skill level or technical knowledge. To achieve this, we take things that used to require advanced technical knowledge and make it accessible to everyone.
Why does performance matter, you ask? Performance is essential for a great user experience on the web, and poor performance can hurt your business. A slow website can lead to visitors leaving and not coming back. If your checkout process is slow, users might not end up buying products in your online store. With performance being a factor for search engines, it can also affect your site’s ranking.
There are many different aspects to performance within WordPress. Here, I am usually referring to the performance of the frontend of your website, as measured using metrics such as Core Web Vitals. Core Web Vitals are a set of performance metrics that measure loading performance, interactivity, and layout stability. WordPress has been working to improve its performance, and Core Web Vitals are a great way to measure that progress.
The WordPress core performance team was founded a few years ago. It is dedicated to monitoring, enhancing, and promoting performance in WordPress core and its surrounding ecosystem. Having a dedicated team for this kind of effort shows that the community understands the rising complexity of today’s websites. This way, WordPress is well-equipped to cater to these use cases in a performant way.
The team’s activities can be roughly grouped into three categories:
Improving core itself, providing new APIs, fixing slow code and measuring improvements
Working with the ecosystem to help people adopt best practices and make their projects faster
Providing tools and documentation to facilitate doing so.
Tackling performance in an open source project like WordPress involves more than improving the core software itself. This is different from closed platforms, where you don’t have to worry about elevating an entire ecosystem with thousands of plugins and themes. Democratizing performance is not something that WordPress or the core performance team can do alone. It takes all of us, including site assemblers and extenders, to work together to raise the bar for everyone.
Recent core performance improvements
Still, there are some things we can do in core itself. Despite the performance team’s young age, it already has a proven track record of performance enhancements. To name a few:
Automated performance testing using Playwright, feeding metrics into a public dashboard
Improvements to image lazy loading. This includes adding the fetchpriority attribute to the image which is likely to be the LCP element.
Improve emoji support detection
In my talk, I highlighted the emoji change because it’s such a great example of improving performance for everyone. Ever since WordPress added emoji support 10 years ago, it loads a little bit of JavaScript on every page to see whether your browser supports the latest and greatest emoji (there are new ones almost every year). It turns out that doing so on every page load is quite wasteful — who would have thought!
Fortunately, there are better ways to do this. Since last year, this emoji detection happens only once per visit, caching results for subsequent visits in the same session. Additionally, the detection now happens in a web worker, keeping the main thread free for more important work. This change was the main contributing factor to a ~30% client-side performance improvement in WordPress 6.3, compared to WordPress 6.2. To benefit from this, all you had to do was update your website — a great example of performance for everyone.
Measuring success
Such impressive numbers are testament to the focus on data-driven decision making in WordPress. It’s important to base our work on actual numbers rather than a gut feeling. Getting these numbers is a two-fold process.
1. Reproducible, automated testing in a controlled environment, where you measure the desired metrics for each code change to verify improvements. This is also known as lab testing, producing lab data.
2. Measuring how WordPress performs for millions of actual sites out in the world. This is so-called field data.
This kind of field data is available for free through datasets such as HTTP Archive and the Chrome UX Report. The latter is the official dataset of the Web Vitals program. All user-centric Core Web Vitals metrics are represented there. These datasets provide a more accurate picture of WordPress performance in the wild, so this is the data we want to positively influence. The report on WordPress performance impact on Core Web Vitals in 2023 covers some recent highlights in this regard.
WordPress performance in 2024
The improvements in the last couple of years were already pretty impressive. Still, the performance team continues working hard on even further improvements. The current year is still relatively young, but there are already some exciting new changes in the works:
Performant translations: Previously, localized WordPress sites could be up to 30% slower than non-localized ones. Thanks to a new translation library in WordPress 6.5, this difference is now almost completely eliminated.
Interactivity performance: Interaction to Next Paint (INP) is now officially a Core Web Vital, so interactivity performance is top of mind for our team. We’re currently identifying common key problems and opportunities to improve interactivity bottlenecks in WordPress, and we are also spreading the word about the new Interactivity API in WordPress 6.5.
Client-side image processing: You might have heard about my media experiments work already. Essentially, we want to bring most of the image processing in WordPress from the server to the client. This enables exciting things like image optimization or AVIF conversion directly in the browser, regardless of what server you are on.
Speculative page prerendering: There is now a new feature plugin that adds support for speculative prerendering for near-instant page loads. It builds on a new browser API for prerendering the next pages in the background, for example when hovering over a link. This is a great example of how WordPress can embrace the web platform and provide native support for new APIs to developers.
As I said in the beginning of this article, democratizing performance involves more than making WordPress itself faster. It’s the responsibility of site builders and developers too. That’s why we try to help the ecosystem track performance and adopt new features we build. Be it through WordCamp talks like this one or more advanced documentation. My recent blog posts on WordPress performance testing and the Plugin Check plugin are great examples of that effort. Both tools are also available as GitHub Actions, making it really easy to get started.
Performance Lab is also a great tool for us to improve performance well beyond WordPress core. If you haven’t heard about it yet, Performance Lab is a collection of performance-related feature plugins. It allows you to test new features before they eventually end up in a new WordPress release. This way, the team can validate ideas at a larger scale and iterate on them more quickly. Once they are stable, we can propose merging them into core. Or, sometimes we find that a particular enhancement doesn’t work that well, so we discard it.
Dreaming bigger
As you can see, there is a lot going on already in the performance space. But what if we go further than that? What if we dream bigger? I want to spark your imagination a little bit by thinking about some other exciting things that we could build. For example, there could be a performance section in Query Monitor. Or imagine your browser’s developer tools not only telling how to improve your slow JavaScript, but also how to do it in the context of WordPress.
Platforms like WP Hive and PluginTests.com are also promising. They basically test all plugins in the entire plugin directory to see if they work as expected. They also measure memory usage and performance impact on both the frontend and backend. Thanks to a browser extension, this information is surfaced directly inside the plugin directory. Ironically, we actually already have the infrastructure available on WordPress.org to do this ourselves. Tide was built exactly for this purpose. Unfortunately the project has stalled since its original inception 6 years ago, but what if it came back?
Finally, how does artificial intelligence fit into this? Of course in 2024 you kind of have to mention AI one way or the other. Imagine an AI assistant in your WordPress admin that tells you how to optimize your site’s configuration. Or the assistant in your code editor having deep knowledge of the latest WordPress APIs and telling you how to load your plugin’s JavaScript in a more efficient way. You don’t have to be an expert to benefit from such helpers, making it possible for everyone to have a fast website.
Conclusion
Performance is a critical factor for any website. WordPress is committed to making good performance accessible to everyone, and the results are showing. Still, there is a lot of work to be done and a lot of opportunities to improve performance at scale. The core performance team can’t do this alone. It takes all of us — site owners, site builders, developers, agencies — to make the web a better — and faster — place.
A while back the Plugin Check tool was first announced, and version 1.0 is just around the corner. It’s a plugin to test your WordPress plugins 🤯. Specifically, it is a tool for testing whether your plugin meets the required standards for the WordPress.org plugin directory. Additionally, Plugin Check flags violations or concerns around plugin development best practices in areas such as internationalization, accessibility, performance, and security.
Plugin Check is a joint effort between the plugin review and core performance teams, which is why I want to make a case for using it with your existing WordPress plugin. While the similar Theme Check plugin focuses only on theme submissions, Plugin Check’s goal is to be helpful even during development. The tool has two categories of checks: static checks (using static analysis tools like PHP_CodeSniffer) and runtime checks, where it actually activates your plugin to test it “live”.
Many of these checks or sniffs are not fully available yet, but here are some examples of what the test tool can flag in the future:
Scripts and styles exceeding a certain file size
Unnecessarily enqueueing scripts and styles on every page instead of only when needed
Unnecessarily marking database options as autoloaded, slowing down the alloptions query (see the sketch below)
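As a hedged illustration of that last point (the option name is hypothetical), the autoload parameter keeps rarely used options from being loaded on every request:

// Store infrequently needed data without autoloading it on every request.
add_option( 'my_plugin_stats', $stats, '', false );

// The third parameter of update_option() controls autoloading as well.
update_option( 'my_plugin_stats', $stats, false );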
If more plugins follow best practices like these, the plugin ecosystem will be in much better shape performance-wise! That’s why now is the ideal time to start using the tool, so that you are set up for success today and have a head start once these checks are implemented.
Integrating Plugin Check
So how can you incorporate Plugin Check into your development workflow?
One way is to simply install the plugin on a local environment and run it against your plugin.
Another way, and what I would recommend, is to integrate it into your Continuous Integration (CI) pipeline.
For this reason I built a dedicated GitHub action. It automatically runs Plugin Check and posts all results as annotations on your source files so you know exactly where to look for resolving any errors or warnings.
Integration can be as simple as this:
name: 'build-test'
on: # rebuild any PRs and main branch changes
  pull_request:
  push:
    branches:
      - main
      - 'releases/*'

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Run plugin check
        uses: WordPress/plugin-check-action@v1
As for the static checks: if you are already using PHPCS, you don’t really need Plugin Check to run the same sniffs twice, so those can be disabled. For example:
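# A sketch; the input name and check slugs are illustrative,
# see the action's README for the exact names.
- name: Run plugin check
  uses: WordPress/plugin-check-action@v1
  with:
    exclude-checks: |
      i18n_usage
      late_escaping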
Learn how to set up Playwright-based end-to-end performance testing for your own WordPress project.
Introduction
End-to-end (E2E) tests are a type of software testing that verifies the behavior of a software application from, well, end to end. They simulate an actual user interacting with the application to verify that it behaves as expected. E2E tests are important because they can help to identify and fix bugs that may not be caught by unit tests or other types of testing. Additionally, they can help to ensure that the application is performing as expected under real-world conditions, with real user flows that are typical for the application. This means starting an actual web server, installing WordPress, and interacting with the website through a browser. For example, the majority of the block editor is covered extensively by end-to-end tests.
Performance testing
Browser-based performance testing is a subset of this kind of testing. Such tests measure the speed and reactivity of the website in order to find performance regressions. This includes common metrics such as Web Vitals or page load time, but also dedicated metrics that are more tailored to your project. For instance, Gutenberg tracks things like typing speed and the time it takes to open the block inserter.
Both WordPress core and Gutenberg use Playwright for end-to-end and performance tests. It supports multiple browsers and operating systems, and provides a great developer experience thanks to a resilient API and powerful tooling. If you know Puppeteer: Playwright was created by the same team and enhances it in many ways. The WordPress project is actually still undergoing a migration from Puppeteer to Playwright.
This article shows how to set up Playwright-based end-to-end tests for your own project, with a focus on performance testing. To familiarize yourself with how Playwright works, explore their Getting Started guide. Would you like to jump straight to the code? Check out this example project on GitHub! It provides a ready-to-use boilerplate for Playwright-based performance tests that you can add to your existing project.
Before diving right into the details of writing performance tests and fiddling with reporting, there is also a shortcut to get your feet wet.
Most of what I cover in this article is also available in a single, ready-to-use GitHub Action. You can easily add it to almost any project with little to no configuration. Here’s an example of the minimum setup needed:
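# Illustrative minimal setup; see the action's README for all available inputs.
- name: Run performance tests
  uses: swissspidy/wp-performance-action@v1
  with:
    urls: |
      /
      /sample-page/
    plugins: |
      ./my-plugin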
Using this action will spin up a new WordPress installation, install your desired plugins and themes, run Playwright tests against the provided pages on that site, and print easy-to-understand results to the workflow summary.
This one-stop solution allows you to quickly get started with performance testing in a WordPress context and helps you familiarize yourself with the topic. It might even cover all of your needs already, which would be even better! Another big advantage of such a GitHub Action is that you automatically benefit from new changes made to it. And if you ever need more, continue reading below to learn how you can do it yourself.
Update (September 2024): check out my follow-up post about v2 of this GitHub Action using WordPress Playground.
Setting up Playwright tests for a WordPress plugin/theme
Reminder: if you want a head start on setting up Playwright tests, check out the example project on GitHub. It provides a ready-to-use boilerplate with everything that’s covered below.
This article assumes that you are developing WordPress blocks, plugins, themes, or even a whole WordPress site, and are familiar with the common @wordpress/scripts and @wordpress/env toolstack. The env package allows you to quickly spin up a local WordPress site using Docker, whereas the scripts package offers a range of programs to lint, format, build, and test your code. This conveniently includes Playwright tests! In addition to that, the @wordpress/e2e-test-utils-playwright package offers a set of useful helpers for writing Playwright tests for a WordPress project.
All you need to get started is installing these packages using npm:
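npm install --save-dev @wordpress/env @wordpress/scripts @wordpress/e2e-test-utils-playwright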
Check out the @wordpress/env documentation on how to further configure or customize your local environment, for example to automatically install and activate your plugin/theme in this new WordPress site.
Note: if you already have @wordpress/env installed or use another local development environment, skip this step and use your existing setup.
To run the Playwright tests with @wordpress/scripts, use the command npx wp-scripts test-playwright. If you have a custom Playwright configuration file in your project root directory, it will be automatically picked up. Otherwise, provide the path like so:
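npx wp-scripts test-playwright --config tests/performance/playwright.config.ts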
In a custom config file you can override some of the details from the default configuration provided by @wordpress/scripts. Refer to the documentation for a list of possible options. Most commonly, you would need this to customize the default test timeout, the directory where test artifacts are stored, or how often each test should be repeated.
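Here is a minimal sketch of such a config file, extending the defaults shipped with @wordpress/scripts (the overridden values are illustrative):

// tests/performance/playwright.config.ts
import { defineConfig } from '@playwright/test';
import baseConfig from '@wordpress/scripts/config/playwright.config.js';

export default defineConfig( {
	...baseConfig,
	timeout: 120_000, // Performance test runs can be slow.
	outputDir: 'artifacts',
	repeatEach: 1,
} );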
Writing your first end-to-end browser test
The aforementioned utilities package hides most of the complexity of writing end-to-end tests and provides functionality for the most common interactions with WordPress. Your first test could be as simple as this:
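import { test, expect } from '@wordpress/e2e-test-utils-playwright';

// Fixtures like admin and requestUtils are provided by the package.
test.describe( 'Dashboard', () => {
	test.beforeAll( async ( { requestUtils } ) => {
		await requestUtils.activateTheme( 'twentytwentyone' );
	} );

	test( 'should show the welcome panel', async ( { admin, page } ) => {
		await admin.visitAdminPage( '/' );
		await expect(
			page.getByRole( 'heading', { name: 'Welcome to WordPress' } )
		).toBeVisible();
	} );
} );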
When running npx wp-scripts test-playwright, this test visits /wp-admin/ and waits for the “Welcome to WordPress” meta box heading to be visible. And before all tests run (in this case there is only one), it ensures the Twenty Twenty-One theme is activated. That’s it! No need to wait for the page to load or anything; Playwright handles everything for you. And thanks to the locator API, the test is self-explanatory as well.
Locators are very similar to what’s offered by Testing Library in case you have used that one before. But if you are new to this kind of testing API, it’s worth strolling through the documentation a bit more. That said, the easiest way to write a new Playwright test is by recording one using the test generator. That’s right, Playwright comes with the ability to generate tests for you as you perform actions in the browser and will automatically pick the right locators for you. This can be done using a VS Code extension or by simply running npx playwright codegen. Very handy!
Setting up performance tests
The jump from a simple end-to-end test to a performance test is not so big. The key difference is performing some additional tasks after visiting a page, further processing the collected metrics, and then repeating all that multiple times to get more accurate results. This is where things get interesting!
First, you need to determine what you want to measure. When running performance tests with an actual browser, it’s of course interesting to simply measure how fast your pages load. From there you can expand to measuring more client-side metrics such as specific user interactions or Web Vitals like Largest Contentful Paint. But you can also focus more on server-side metrics, such as how long it takes for your plugin to perform a specific task during page load. It all depends on your specific project requirements.
Writing your first performance test
Putting all of these pieces together, we can turn a simple end-to-end test into a performance test. Let’s track the time to first byte (TTFB) as a start.
import { test } from '@wordpress/e2e-test-utils-playwright';

test.describe( 'Front End', () => {
	test.use( {
		storageState: {}, // User will be logged out.
	} );

	test.beforeAll( async ( { requestUtils } ) => {
		await requestUtils.activateTheme( 'twentytwentyone' );
	} );

	const iterations = 20;
	for ( let i = 1; i <= iterations; i++ ) {
		test( `Measure TTFB (${ i } of ${ iterations })`, async ( {
			page,
			metrics,
		} ) => {
			await page.goto( '/' );
			const ttfb = await metrics.getTimeToFirstByte();
			console.log( `TTFB: ${ ttfb }` );
		} );
	}
} );
What stands out is that Playwright’s storageState is reset for these tests, ensuring they are performed as a logged-out user, since being logged in could skew results. Of course, for some other scenarios this is not necessarily desired. It all depends on what you are testing.
Second, a for loop around the test() block allows running the test multiple times. It’s worth noting that the loop should be outside the test and not inside. This way, Playwright can ensure proper test isolation, so that a new page is created with every iteration. It will be completely isolated from the other pages, like in incognito mode.
The metrics object used in the test is a so-called test fixture provided by, you guessed it, the e2e utils package we’ve previously installed. How convenient! From now on, most of the time we will be using this fixture.
Measuring all the things
Server-Timing
The Server-Timing HTTP response header is a way for the server to send information about server-side metrics to the client. This is useful to get answers for things like:
Was there a cache hit?
How long did it take to load translations?
How long did it take to load X from the database?
How many database queries were performed?
How much memory was used?
The last ones are admittedly a bit of a stretch. Server-Timing is meant for duration values, not counts. But it’s the most convenient way to send such metrics because they can be processed in JavaScript even after a page navigation. For Playwright-based performance testing this is perfect.
In WordPress, the easiest way to add Server-Timing headers is by using the Performance Lab plugin. By default it supports exposing the following metrics:
wp-before-template: Time it takes for WordPress to initialize, i.e. from the start of WordPress’s execution until it begins sending the template output to the client.
wp-template: Time it takes to compute and render the template, which begins right after the above metric has been measured.
wp-total: Time it takes for WordPress to respond entirely, i.e. this is simply the sum of wp-before-template + wp-template.
Additional metrics can be added via the perflab_server_timing_register_metric() function. For example, something like this (a sketch based on the Performance Lab documentation; check the plugin for the exact callback signature) adds the number of database queries to the header:
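perflab_server_timing_register_metric(
	'db-queries',
	array(
		'measure_callback' => static function ( $metric ) {
			// Defer the measurement until right before the header is sent.
			add_action(
				'perflab_server_timing_send_header',
				static function () use ( $metric ) {
					global $wpdb;
					$metric->set_value( $wpdb->num_queries );
				}
			);
		},
		'access_cap'       => 'exist',
	)
);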
Besides getServerTiming() and getTimeToFirstByte(), the metrics fixture provides a handful of other helpers to measure certain load time metrics to make your life easier:
getLargestContentfulPaint: Returns the Largest Contentful Paint (LCP) value using the dedicated API.
getCumulativeLayoutShift: Returns the Cumulative Layout Shift (CLS) value using the dedicated API.
getLoadingDurations: Returns the loading durations using the Navigation Timing API. All the durations exclude the server response time. The returned object contains serverResponse, firstPaint, domContentLoaded, loaded, firstContentfulPaint, timeSinceResponseEnd.
Some of these methods are mostly there because it’s trivial to retrieve the metrics, but not all of these might make sense for your use case.
Tracing
The metrics fixture provides an easy way to access Chromium’s trace event profiling tool. It allows you to get more insights into what Chrome is doing “under the hood” when interacting with a page. To give you an example of what this means, in Gutenberg this is used to measure things like typing speed.
// Start tracing.
await metrics.startTracing();

// Type the testing sequence into the empty paragraph.
await paragraph.type( 'x'.repeat( iterations ) );

// Stop tracing.
await metrics.stopTracing();

// Get the durations.
const [ keyDownEvents, keyPressEvents, keyUpEvents ] =
	metrics.getTypingEventDurations();
In addition to getTypingEventDurations() there are also getSelectionEventDurations(), getClickEventDurations(), and getHoverEventDurations().
Lighthouse reports
The @wordpress/e2e-test-utils-playwright package has basic support for running Lighthouse reports for a given page. Support is basic because it only performs a handful of audits and does not yet allow any configuration. Also, due to the way Lighthouse works, it’s much slower than taking similar measurements by hand using simple JavaScript snippets. That’s because it does a lot of things under the hood like applying CPU and network throttling to emulate mobile connection speeds. Still, it can be useful to compare numbers and provide feedback to the folks working on this package to further improve it. A basic example:
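// Assuming the package's lighthouse fixture is available in the test.
const report = await lighthouse.getReport();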
This one line is enough to run a Lighthouse report, which involves opening a new isolated browser instance on a dedicated port for running tests in.
Interactivity metrics
The metrics fixture already provides ways to get some Web Vitals values such as First Contentful Paint (FCP), Largest Contentful Paint (LCP), and Cumulative Layout Shift (CLS). They cover loading performance and layout stability. Interaction to Next Paint (INP), a pending Core Web Vital metric that will replace First Input Delay (FID) in March 2024, is notably absent from that list. That’s because it’s not so trivial to retrieve, as it requires user interaction.
INP is a metric that assesses a page’s overall responsiveness to user interactions. It does so by observing the latency of all click, tap, and keyboard interactions that occur throughout the lifespan of a user’s visit to a page. The final INP value is the longest interaction observed, ignoring outliers. So how can you measure that reliably in an automated test? Enter the web-vitals library.
This library is the easiest way to measure all the Web Vitals metrics in a way that accurately matches how they’re measured by Chrome and reported to tools like PageSpeed Insights.
As of very recently (i.e. it’s not even released yet!), the metrics fixture has preliminary support for web-vitals.js and allows measuring web vitals using one simple method:
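console.log( await metrics.getWebVitals() );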
Under the hood, this will refresh the page while simultaneously loading the library and collecting numbers. It will measure the CLS, FCP, FID, INP, LCP, and TTFB metrics for the given page and return all the ones that exist.
Again, metrics like INP require user interaction. To accommodate for that, separate the loading and collection part like so:
await metrics.initWebVitals( /* reload */ false );
await page.goto( '/some-other-page/' ); // web-vitals.js will be loaded now.

// Interact with page here...

console.log( await metrics.getWebVitals() );
You may find that retrieving web vitals using this single method is easier than calling separate getLargestContentfulPaint() and getCumulativeLayoutShift() methods, though the reported numbers will be identical. In the future these methods may be consolidated into one.
Making sense of performance metrics
With the foundation for running performance tests in place and all these functions for retrieving metrics at your disposal, the next step is to collect the data in a uniform way. What’s needed is a way to store results and, ideally, compare them with earlier ones, so that you can make sense of all these metrics and identify performance regressions.
For this purpose, I’ve built a custom test reporter that takes the metrics collected in tests and combines them all in one single file. A second command-line script then formats the data and optionally performs a comparison as well. The reporter and the CLI script are both available on the demo GitHub repository, together with all the other example code from this article.
Note: there is work underway to further refine these scripts and make them more easily available through a dedicated npm package. Imagine a single package like @wordpress/performance-tests that provides the whole suite of tools ready to go! This is currently being discussed, and I will update this post accordingly when something like this happens.
In a GitHub Action, you would run this combination of scripts in the following order (see the workflow sketch after the list):
1. Start the web server (optional, as Playwright will otherwise start it for you)
2. Run the tests
3. Optionally, run the tests again for the previous commit or target branch. The raw results are also available as a build artifact and a step output, so you don’t have to unnecessarily run tests twice but can reuse previous results.
4. Run the CLI script to format the results and optionally compare them with the ones from step 3
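Here is a trimmed-down sketch of such a workflow. The paths, script names, and action versions are illustrative assumptions, not the exact setup from the demo repository:

name: Performance Tests

on: push

jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install dependencies and the browser binaries Playwright needs.
      - run: npm ci
      - run: npx playwright install chromium --with-deps
      # Run the performance tests; Playwright starts the web server if configured.
      - run: npx playwright test --config tests/performance/playwright.config.ts
      # Format the collected metrics (hypothetical script name).
      - run: node bin/format-performance-results.js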
Eventually you will get to a point where you want to see the bigger picture and track your project’s performance over time, for example using a dedicated dashboard such as the one WordPress core currently uses.
When doing so, you will inevitably need to deal with data storage and visualization, as well as with things like variance between individual runs. These are not fully solved problems yet, neither for WordPress projects nor in general. In a future blog post I plan to go more in depth on this side of things and show you how to set it all up for your project, allowing you to make more sense of performance metrics over time.
Conclusion
With the foundation from this blog post you should be able to start writing and running your first performance tests for your WordPress project. However, there is still a lot that can be covered and optimized, as performance testing can be quite a complex matter.
And of course please let me know your thoughts in the comments so the team and I can further refine the techniques shared in this post. Thanks for reading!
Over the past few years I’ve grown to like PHPStan, a static analysis tool for PHP projects. It’s great for catching bugs and improving code quality. In this post, I’ll show you how to use PHPStan for WordPress plugin or theme development.
What PHPStan does
There are many tools for analyzing PHP code out there, such as PHP-Parallel-Lint for syntax checks, or PHP_CodeSniffer to check conformance with coding standards. The latter is often used in WordPress projects (including core itself!) thanks to the dedicated WordPress Coding Standards ruleset.
In addition to these tools, PHPStan tries to find bugs based on the information it derives from typehints and PHPDoc annotations, without actually running your code or writing any tests. Among the things it tests are the existence and accessibility of classes, methods, and functions, argument mismatches, and of course type mismatches.
Strongly-typed code gives PHPStan more information to work with. Keep in mind that typehinted and annotated code helps both static analysis tools and people understand the code.
Getting started
While I try to cover the basics of how to set up PHPStan, I recommend reading the project’s excellent Getting Started guide, which goes into more detail.
To get the ball rolling, install PHPStan via Composer:
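composer require --dev phpstan/phpstan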
After that you could already try running it like this:
vendor/bin/phpstan analyse src
Here, src is the folder containing all your plugin’s files.
I usually prefer setting such commands as Composer scripts so I don’t have to remember them in detail. Plus, the src path can be omitted if it’s defined in a configuration file. But more on that later.
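As a sketch, the relevant part of composer.json could look like this (the exact memory limit value is just an example):

{
	"scripts": {
		"phpstan": "phpstan analyse --memory-limit=2048M"
	}
}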
Note how I also increase the memory limit. I’ve found this to be necessary for most projects as PHPStan tends to consume quite a lot of memory for larger code bases.
Work around PHP version requirement
PHPStan requires PHP >= 7.2 to run, but the code it analyzes doesn’t have to meet that requirement. So if your project’s minimum supported PHP version is lower than 7.2, I suggest a small workaround: install PHPStan in a separate directory with its own composer.json file, such as build-cs/composer.json.
This file is separate from your project’s Composer configuration and contains only the dependencies with a higher version requirement:
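{
	"require-dev": {
		"phpstan/phpstan": "^1.10",
		"szepeviktor/phpstan-wordpress": "^1.3"
	}
}

The version constraints above are placeholders; the phpstan script in your main composer.json would then presumably point at build-cs/vendor/bin/phpstan instead of the project-local binary.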
This way, running composer run phpstan will analyze your code as usual, even though your main composer.json targets an older PHP version.
Note: It goes without saying that I highly recommend reconsidering your project’s version requirements. Then you won’t have to add such workarounds.
Telling PHPStan about WordPress
Remember how PHPStan analyzes your code to check the existence of classes and functions? It looks for them in all your analyzed files as well as your Composer dependencies. But WordPress core isn’t really a dependency of your WordPress plugin.
So without any additional configuration, PHPStan won’t know about WordPress-specific code like the WP_DEBUG constant or the WP_Block class. You first have to make it aware of them.
Thankfully, the php-stubs/wordpress-stubs package provides stub declarations for WordPress core functions, classes, and interfaces. Stubs look like source code, but only the PHPDocs are read from them. However, we are not going to use this package directly.
Instead, you should install the WordPress extension for PHPStan. Not only does it load the php-stubs/wordpress-stubs package, it also defines some core constants and handles special functions like is_wp_error() or get_posts(), as well as apply_filters() usage.
Simply require the extension in your project and you are all set:
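composer require --dev szepeviktor/phpstan-wordpress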
If your WordPress plugin or theme integrates with other plugins like for example WooCommerce, you will also need to provide stub declarations for them to PHPStan.
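For WooCommerce, for example, there is the php-stubs/woocommerce-stubs package. Loading it could look roughly like this in your configuration file (the exact parameter and path may differ; check the package’s README):

parameters:
	scanFiles:
		- vendor/php-stubs/woocommerce-stubs/woocommerce-stubs.php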
You can also generate stubs for other projects yourself using the available generator library.
Baseline configuration
Now that we have installed all the necessary tools, we can start with our initial configuration.
Here it’s important to know about the different rule levels PHPStan supports. The default level is 0 and is for the most basic checks. With each level, more checks are added. Level 9 is the strictest.
If you want to use PHPStan but your codebase isn’t quite there yet, you can start with a lower level and increment it over time.
Alternatively, you can use PHPStan’s baseline feature to ignore currently reported errors in subsequent runs, focusing only on new and changed code. This way you can check newer code at a higher level, giving you time to fix the existing code later.
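Generating the baseline is a single command, which writes all currently reported errors to a phpstan-baseline.neon file that you then include from your configuration:

vendor/bin/phpstan analyse --generate-baseline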
Let’s say we want to start with level 1 and analyze everything in our src folder as mentioned above. Our minimum configuration file will look like this:
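parameters:
	level: 1
	paths:
		- src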
Save this file as phpstan.neon.dist. PHPStan uses a configuration format called NEON which is similar to YAML, hence the file name.
This configuration file is also where you point PHPStan to all the different stub declarations if you have any.
Now you can truly run composer run phpstan and PHPStan will analyze your project while being fully aware of the WordPress context.
Some errors you might see after your initial run are “If condition is always true” or “Call to an undefined function xyz()”. Some of them are pretty easy to resolve; others require consulting the documentation or asking the community for help.
Improving PHPDocs
PHPStan relies on typehints and PHPDoc comments to understand your code. PHPDocs can also provide additional information, such as what’s in an array.
When errors are reported in your existing code base, chances are that the PHPDoc annotations can be improved.
Learn more about PHPDocs basics and all the types PHPStan understands. It also supports some proprietary @phpstan- prefixed tags that are worth checking out.
Sometimes you might get errors because of incorrect PHPDocs in one of your dependencies, like WordPress core. In this case, I suggest temporarily ignoring the error in PHPStan and submitting a ticket and pull request to improve the documentation upstream.
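Ignoring such an error works via the ignoreErrors parameter in the configuration file; the error message pattern here is just a made-up example:

parameters:
	ignoreErrors:
		- '#Call to an undefined function xyz\(\)\.#'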
Usage in REST API code
A special case for some WordPress projects when it comes to PHPDoc is the WordPress REST API. If you have ever extended it in any way, for example by adding new fields to the schema, you’ll know how the fields available in a WP_REST_Request or WP_REST_Response object are based on that schema (plus some special fields for the former, like context or _embed).
Let’s say you have some code like this in your REST API controller:
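// A minimal sketch of a controller method; the names are illustrative.
public function get_items( $request ) {
	$per_page = $request['per_page'];

	// ...
}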
A static analysis tool like PHPStan cannot know that per_page is a valid request param defined in your schema. $request['per_page'] could be anything and thus $per_page will be treated as having a mixed type.
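One way to work around that is an inline @var annotation overriding the type:

/** @var int $per_page */
$per_page = $request['per_page'];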
However, this should only be used as a last resort as it quickly leads to a lot of repetition and the annotations can easily become outdated.
Luckily, the WordPress stubs package provides a more correct way to describe this code, using array shapes:
/**
* @param WP_REST_Request $request Full details about the request.
* @return WP_REST_Response|WP_Error Response object on success, WP_Error object on failure.
*
* @phpstan-param WP_REST_Request<array{post?: int, orderby?: string}> $request
 */
public function create_item( $request ) {
	// This is an int.
	$parent_post = ! empty( $request['post'] ) ? $request['post'] : null;
	// This is a string.
	$orderby = ! empty( $request['orderby'] ) ? $request['orderby'] : 'date';
}
Using PHPStan for tests
So far I have focused on using PHPStan for analyzing your plugin’s main codebase. But why stop there? Static analysis is also tremendously helpful for finding issues with your tests and assertions therein.
To set this up, you will need to install the PHPStan PHPUnit extension and rules, as well as the WordPress core test suite stubs so PHPStan knows about all the classes and functions coming from there, like all the factory methods.
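Installing both could look like this (assuming the phpstan/phpstan-phpunit extension and the php-stubs/wordpress-tests-stubs package):

composer require --dev phpstan/phpstan-phpunit php-stubs/wordpress-tests-stubs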
I find running PHPStan on tests very useful: it helps write better assertions, find flawed tests, and uncover room for improvement in your main codebase.
Running PHPStan on GitHub Actions
Once you have set up everything locally, configuring the static analysis to run on a Continuous Integration (CI) service like GitHub Actions becomes a breeze. You don’t need to configure a custom action or anything. Simply setting up PHP and installing Composer dependencies is enough. Here’s an example workflow:
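Something like the following minimal sketch would do (the PHP version and action versions here are illustrative):

name: Static Analysis

on: push

jobs:
  phpstan:
    runs-on: ubuntu-latest
    steps:
      # Check out the code.
      - uses: actions/checkout@v4
      # Set up PHP; Composer is available by default with this action.
      - uses: shivammathur/setup-php@v2
        with:
          php-version: 'latest'
      # Install dependencies.
      - run: composer install --prefer-dist --no-progress
      # Run PHPStan via the Composer script defined earlier.
      - run: composer run phpstan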
Beyond the WordPress and PHPUnit extensions introduced so far, there are many more that can be valuable additions to your setup. Remember: with phpstan/extension-installer they will be available to PHPStan automatically, but some of them can be further tweaked via the configuration file.
swissspidy/phpstan-no-private: Detects usage of WordPress functions and classes that are marked as @access private and should not be used outside of core. And yes, I actually wrote this one.
phpstan/phpstan-deprecation-rules: Similar to the above, this extension detects usage of functions and classes that are marked as @deprecated and should no longer be used. Again very useful in a WordPress context.
phpstan/phpstan-strict-rules: The highest level in PHPStan is already quite strict, but if you want even more strictness and type safety, this is the extension for you. It’s also possible to enable only some of its rules.
johnbillion/wp-compat: Helps verify that your PHP code is compatible with a given version of WordPress. It warns you about things like using functions from newer WordPress versions without an accompanying function_exists() check.
Wrapping Up
I hope this post serves as a good introduction to using PHPStan in a WordPress project. There are so many more things I could cover, like how to set the PHP version, the PHPStan Pro offering, or the interactive playground. But I didn’t want to make this even more overwhelming, so maybe I’ll save these tips for another post.
If you have any questions about PHPStan or want to share your experiences with it, I would love to hear them in the comments!
Finally, if you end up using PHPStan in your WordPress project, consider donating to either PHPStan itself, or Viktor Szépe who maintains the WordPress extension and stubs.