Why Modern Web Development is a Mess
When people complain about modern web development, it's usually about the node_modules folder, crazy webpack build times, or thousands of trivial leftpad-like imported modules. I'd like to look at the modern web from a different perspective: the perspective of UI automation.
I worked on UI automation for a while. The task is seemingly simple — simulate user interactions with a web page. It's useful for end-to-end or performance testing of your application, or just for pure automation of certain activities. A typical tool for web UI automation is Selenium, or any framework built on top of it, e.g. Serenity BDD.
When you write UI automation, you don't really care how the web application is written, whether it's a single-page application in React or Angular or just plain old HTML with jQuery. All you care about is interacting with some elements and observing some expected behavior.
Imagine a simple case of clicking two elements in succession, e.g. triggering a combobox to open and then selecting something inside the dropdown that appears.
Seems trivial. Just look at your web page as an API:
- clicking on a trigger button is similar to a REST API call;
- waiting for the dropdown to appear is similar to getting a synchronous response;
- clicking on a dropdown element is similar to invoking another API call.
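The "page as API" mental model above can be sketched with a toy, purely synchronous page object (the `Page` class and its methods are hypothetical, invented for illustration — this is not a real Selenium API):

```python
# A toy, synchronous model of the "page as API" view.
# In this ideal world, every "response" is visible the moment the call returns.
class Page:
    def __init__(self):
        self.dropdown_open = False
        self.options = []

    def click_trigger(self):
        # Like a REST call: the "response" (open dropdown) arrives synchronously.
        self.dropdown_open = True
        self.options = ["red", "green", "blue"]

    def click_option(self, name):
        # Another call against state the previous call fully established.
        return self.dropdown_open and name in self.options


page = Page()
page.click_trigger()
print(page.click_option("green"))  # True: the second call always succeeds
```

If pages actually behaved like this, two clicks in a row would be as reliable as two sequential HTTP calls.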
But if you've ever done UI automation, you know it's never that simple.
You quickly come to understand that modern web UI is incredibly brittle. If you start treating it like an API, it just falls apart.
Why would that be? I mean, you interact with a machine, right? The machine should work in a deterministic way. If you click on two elements in succession, they should always behave the same, regardless of whether you click on them with a 1-second delay or a 1-millisecond delay. Right?
There’s a lot of asynchronous stuff under the hood that you can’t influence or observe in any way:
- the dropdown may cause an asynchronous backend request to get the options to display;
- after getting the options to display from the backend, it may update the DOM to show them;
- after updating the DOM, it may also add event listeners to new DOM elements to make sure the dropdown is closed whenever you pick an option;
- all of those actions may happen as separate callbacks, some of them may or may not be observable by the automation engine.
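The callback chain above can be simulated with plain threads — a deliberately simplified sketch, where the `AsyncPage` class and its timings are invented for illustration:

```python
import threading
import time

class AsyncPage:
    """A dropdown whose state is assembled by a chain of delayed callbacks."""

    def __init__(self):
        self.dropdown_open = False
        self.options = []
        self.listeners_attached = False

    def click_trigger(self):
        # Each step runs as a separate callback, like chained promises.
        def fetch_options():
            time.sleep(0.05)  # simulated backend request for the options
            self.options = ["red", "green", "blue"]
            threading.Thread(target=update_dom).start()

        def update_dom():
            time.sleep(0.05)  # simulated DOM update that renders the options
            self.dropdown_open = True
            threading.Thread(target=attach_listeners).start()

        def attach_listeners():
            time.sleep(0.05)  # simulated wiring of close-on-select handlers
            self.listeners_attached = True

        threading.Thread(target=fetch_options).start()

    def click_option(self, name):
        # The click only "works" once every callback in the chain has finished.
        return (self.dropdown_open
                and self.listeners_attached
                and name in self.options)


page = AsyncPage()
page.click_trigger()
print(page.click_option("green"))  # clicked immediately: False
time.sleep(0.2)
print(page.click_option("green"))  # clicked after the chain settles: True
```

Same two calls, same machine — the only difference between failure and success is how long the automation happened to wait.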
There are so many asynchronous races going on here that your automation may fail 30% of the time, or 50% of the time, or just once in 1000 runs.
Moreover, some of those things are not observable at all. How can you make sure that all the asynchronous code related to the element being displayed and interactable has finished?
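The best an automation tool can do is poll whatever state it *can* see, the way Selenium's `WebDriverWait` polls expected conditions. Here is a minimal polling helper (the helper and the demo `dom` dictionary are my own sketch, not Selenium's implementation):

```python
import threading
import time

def wait_until(condition, timeout=2.0, interval=0.01):
    """Poll an observable condition until it holds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False


# A "DOM" flag flipped by a background callback IS observable, so we can wait on it:
dom = {"dropdown_visible": False}
threading.Timer(0.05, lambda: dom.update(dropdown_visible=True)).start()
print(wait_until(lambda: dom["dropdown_visible"]))  # True

# But an event listener attached inside some framework-internal callback is NOT
# observable from the outside: there is no hook to poll, so no wait can cover it.
```

This is exactly the gap the article is pointing at: explicit waits fix the races you can observe, and silently do nothing about the ones you can't.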
What bothers me is that nobody ever fixes those races; most developers aren't even aware of them. Why? Because no human could possibly click elements that quickly. The problem is not reproducible, and if a problem is not reproducible, you don't need to fix it. Or do you?
Now imagine you’re a backend developer, and you write a backend API with the same assumptions:
“Oh, nobody will call this API faster than once every millisecond, so I can just rely on that and hack together some asynchronous thread-unsafe fire-and-forget spaghetti code. Well, it will probably overlap at random and fail once every 1000 calls, but who cares.”
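To make the backend analogy concrete, here is the classic lost update that such an assumption produces — a contrived sketch where the sleeps stand in for "some async work" and force the unlucky interleaving to happen every time:

```python
import threading
import time

balance = 100

def withdraw(amount, delay):
    """Thread-unsafe read-modify-write: reads the balance, does some 'async
    work', then writes back a value computed from a stale snapshot."""
    global balance
    snapshot = balance            # read
    time.sleep(delay)             # overlapping work happens in between
    balance = snapshot - amount   # write back based on the stale read

t1 = threading.Thread(target=withdraw, args=(30, 0.10))
t2 = threading.Thread(target=withdraw, args=(20, 0.05))
t1.start(); t2.start()
t1.join(); t2.join()

print(balance)  # 70, not 50: t2's withdrawal was silently overwritten
```

In production the overlap window is microseconds, so the bug fires "once every 1000 calls" — which is precisely why nobody fixes it, on the backend or in the dropdown.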
Sounds crazy, but not in the frontend world.
Asynchronicity of the interface is not the issue here. The issue is the absence of determinism: the absence of understanding of how your code really executes and what the dependencies between its various parts are.
Can you deterministically say what will happen if a user invokes two calls in your application with a 1-millisecond delay? If not, there’s something fundamentally wrong with your code.