Cross-browser testing is rarely exhaustive, so you should definitely start tracking JS errors that may happen on different clients.
This is not an idea of mine: it comes from a pretty smart blog post that illustrates the main concept: when a JS error is encountered, you trigger an HTTP request to a URL that collects the data transmitted within that request and logs it with server-side code.
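A minimal client-side sketch of this idea using `window.onerror`; the `/log-error` endpoint and the payload fields are assumptions, not the original code:

```javascript
// Build the payload we want to report for each error
// (fields chosen here are an assumption for illustration).
function buildErrorReport(message, url, line) {
  return {
    msg: message,   // error message from the browser
    url: url,       // script URL where the error occurred
    line: line,     // line number reported by the browser
    page: typeof location !== 'undefined' ? location.href : '',
    ua: typeof navigator !== 'undefined' ? navigator.userAgent : ''
  };
}

if (typeof window !== 'undefined') {
  // window.onerror receives the message, the script URL and the line;
  // we serialize them into a query string and fire the request via an
  // Image object, which works even in very old browsers.
  window.onerror = function (message, url, line) {
    var report = buildErrorReport(message, url, line);
    var qs = Object.keys(report)
      .map(function (k) { return k + '=' + encodeURIComponent(report[k]); })
      .join('&');
    new Image().src = '/log-error?' + qs;
    return false; // let the browser log the error to the console too
  };
}
```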
So, in the end, you only need to add some basic server-side code to handle the reported data:
You may want to write some additional code to only report errors that you should really fix: based on the user-agent, for example, you can ignore errors triggered by browsers you don't intend to support.
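Such a filter could be as simple as a list of user-agent patterns to skip; the patterns below are only an assumed example:

```javascript
// Skip reports from clients we deliberately do not support
// (the pattern list is an assumption for illustration).
var IGNORED_UA_PATTERNS = [/MSIE [1-6]\./, /bot|crawler|spider/i];

function shouldReport(userAgent) {
  return !IGNORED_UA_PATTERNS.some(function (re) {
    return re.test(userAgent || '');
  });
}
```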
All in all…
This has been a great solution for us, since we could easily keep track of JS code that was causing errors due to:
- lack of compatibility between developers’/users’ platforms
- typos and small errors
- tricky situations in which our code depended on 3rd party scripts that would break our functionality whenever they were unavailable or raised an error on execution
- NodeJS ↩