App performance is a key part of user experience, and users are more likely to churn from a laggy product. That’s why at Teamflow, each time we ship a new version of our app, we ensure that it’s at least as fast as the last version. To do so, we test Teamflow’s performance in various scenarios using a specialized end-to-end automation suite.
By automating performance measurement, we get a baseline to which changes can be compared. We can also gauge the success of our past optimizations. The big challenge here was Teamflow’s multiplayer functionality – we couldn’t find a third-party tool that could measure app performance with multiple instances running simultaneously and communicating with each other.
Setting it up
We already used Playwright to run our end-to-end automated tests. Playwright spins up headless Chromium browser instances that run your application and lets you control them programmatically. We decided to reuse Playwright to run performance tracing on our app. The idea was to have bots join a persistent workspace in our testing environment and capture a DevTools performance trace while the bots simulated a specific use case.
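A minimal sketch of that setup, assuming a hypothetical `WORKSPACE_URL` and a placeholder `simulateUseCase` driver (the real suite’s scenario actions would go there):

```typescript
import { chromium, Page } from 'playwright';

// Hypothetical driver for the scenario a bot acts out; the real suite's
// actions (moving around the office, joining calls, etc.) would go here.
async function simulateUseCase(page: Page) {
  await page.waitForTimeout(30_000);
}

async function runBot(workspaceUrl: string) {
  const browser = await chromium.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto(workspaceUrl); // join the persistent testing workspace

  // startTracing/stopTracing are Playwright's Chromium-only wrappers around
  // the DevTools Tracing domain; the trace is written to trace.json.
  await browser.startTracing(page, { path: 'trace.json' });
  await simulateUseCase(page);
  await browser.stopTracing();

  await browser.close();
}

runBot(process.env.WORKSPACE_URL ?? 'https://example.com');
```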
Unlike our tests, however, performance measurement can’t run on a single node. After spinning up two or three browsers (one per bot), a CI node would slow down dramatically or even crash. So we sharded bots across different nodes and added synchronization so bots would begin the simulation only after everyone was ready. To communicate the status of each bot, we used one of Teamflow’s in-app features: the status bubble.
We created four predefined statuses for our performance bots:
- “Waiting for others…”: When the bots are initially spun up and waiting for other bots to load in.
- “Warming up”: We run a warm-up period on the profiling bot so that V8’s cold start doesn’t skew the measurement.
- “Profiling!” / “Working”: The main bot shows “Profiling!” while it captures the app’s performance with DevTools; the other bots show “Working” while they simulate a real office environment.
- “Done”: The teardown step, after which the bots shut down.
By using status messages, we avoided building yet another communication channel for our tests; a sketch of the synchronization loop follows below.
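Here is a hedged sketch of that loop, with hypothetical `setStatus`/`getStatuses` helpers (the selectors and UI flow are illustrative, not Teamflow’s actual DOM):

```typescript
import type { Page } from 'playwright';

// Hypothetical helper: publishes this bot's status via the in-app status
// bubble. The button name and input flow here are illustrative.
async function setStatus(page: Page, status: string) {
  await page.getByRole('button', { name: 'Set status' }).click();
  await page.getByRole('textbox').fill(status);
  await page.keyboard.press('Enter');
}

// Hypothetical helper: reads every bot's status bubble from the workspace.
// The data-testid selector is an assumption for this sketch.
async function getStatuses(page: Page): Promise<string[]> {
  return page.$$eval('[data-testid="status-bubble"]', (els) =>
    els.map((el) => el.textContent ?? ''),
  );
}

// Each bot advances in lockstep: it publishes its own status, then polls
// until every bot in the workspace shows the same one.
async function syncOnStatus(page: Page, status: string, botCount: number) {
  await setStatus(page, status);
  while (true) {
    const statuses = await getStatuses(page);
    if (statuses.filter((s) => s === status).length >= botCount) return;
    await page.waitForTimeout(1_000); // poll once per second
  }
}
```

Each bot would call `syncOnStatus` at every phase boundary (“Waiting for others…”, “Warming up”, and so on), so no bot starts the simulation before the whole fleet is ready.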
Dealing with the data
The “main” bot that runs the performance profiling step uses the Chrome DevTools Protocol to capture a performance trace. The generated trace contains very detailed performance metrics; we extract the high-level numbers and load them into our data warehouse. We look at the six big categories of CPU time in the trace: idle, loading, painting, rendering, scripting, and other.
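As an illustration, here is a simplified aggregation over the raw trace file. The event-name-to-category mapping is our own rough assumption, not the exact model the DevTools UI uses, and idle time would come from wall-clock time minus the busy categories:

```typescript
import { readFileSync } from 'node:fs';

// Rough mapping from trace event names to high-level buckets; the real
// DevTools categorization is more involved, so treat this as an approximation.
const CATEGORY_BY_EVENT: Record<string, string> = {
  FunctionCall: 'scripting',
  EvaluateScript: 'scripting',
  ParseHTML: 'loading',
  ResourceReceiveData: 'loading',
  Layout: 'rendering',
  UpdateLayoutTree: 'rendering',
  Paint: 'painting',
  CompositeLayers: 'painting',
};

// Chrome trace files are either a bare event array or { traceEvents: [...] }.
const raw = JSON.parse(readFileSync('trace.json', 'utf8'));
const traceEvents: any[] = Array.isArray(raw) ? raw : raw.traceEvents;

const totalsMs: Record<string, number> = {};
for (const event of traceEvents) {
  if (typeof event.dur !== 'number') continue; // only complete events carry a duration
  const category = CATEGORY_BY_EVENT[event.name] ?? 'other';
  totalsMs[category] = (totalsMs[category] ?? 0) + event.dur / 1000; // µs -> ms
}

console.log(totalsMs); // e.g. { scripting: 1234.5, rendering: 321.0, other: 88.2 }
```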
Each time we ship, we make sure that the blue idle bar keeps increasing!
Teamflow is the best place for remote and hybrid teams to collaborate and work together. Start your 30-day free trial today.