[WIP] New internal stats endpoint #6035
base: master
Conversation
Diff under review (router):

```elixir
end

scope private: %{allow_consolidated_views: true} do
  post "/:domain/query", StatsController, :query
```
I know this is a WIP, but with a POST endpoint we give up the ability to cache query responses client-side and in a CDN. I think there's a lot to be gained in app responsiveness and scalability from even very basic caching - we shouldn't give that up.
This did cross my mind, but I deemed it unlikely that we'd ever want to browser/CDN-cache dashboard stats requests, because:
- Stats requests depend not only on the query parameters but also on server-side state: site goals, props, imported data, feature flags, permissions, etc. Invalidating such a cache sounds like quite a challenge.
- JS memory caching (e.g. via TanStack Query, which we're already using in the details modals) gives us better control over freshness and invalidation without relying on HTTP caches. Clicking around the dashboard (filters/period/with_imported) and landing on the same query -> cache hit. Refreshing the entire dashboard -> everything is fresh.
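To make the second point concrete, here's a minimal sketch of the key-based memory caching idea (the mechanism behind TanStack Query's `queryKey`, but written from scratch here - all names are hypothetical, not our actual client code):

```typescript
// Sketch: an in-memory cache keyed by site + canonicalized query params.
// Same filters/period/with_imported -> same key -> cache hit, no network call.
type QueryParams = Record<string, unknown>;

class QueryCache<T> {
  private store = new Map<string, T>();

  // Stable key: sorting top-level keys makes { a, b } and { b, a } equal.
  private keyFor(site: string, params: QueryParams): string {
    return site + ":" + JSON.stringify(params, Object.keys(params).sort());
  }

  async fetch(site: string, params: QueryParams, loader: () => Promise<T>): Promise<T> {
    const key = this.keyFor(site, params);
    const hit = this.store.get(key);
    if (hit !== undefined) return hit; // landing on the same query again: cache hit
    const value = await loader();
    this.store.set(key, value);
    return value;
  }

  // A full dashboard refresh drops everything, so all data is fresh again.
  clear(): void {
    this.store.clear();
  }
}
```

The invalidation story is trivially simple here: a refresh clears the whole cache, so stale goals/props/permissions can never outlive a page load.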
Hmm, but how often do site goals, props, imported data, feature flags, and permissions actually change?
Imagine 100 users accessing a public dashboard within one minute (e.g. a tweet with a link starts trending).
Do we want to run the 100 * 6 Clickhouse queries to populate the dashboard for each of them individually? What's the real harm in serving most of them data that is, say, up to 2 minutes old? It would make a difference for our servers, enabling us to handle more traffic without adding resources.
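The arithmetic behind this can be sketched with a toy TTL cache (the numbers are the hypothetical ones from the scenario above, not measurements):

```typescript
// Sketch: a short server-side TTL collapses a burst of identical dashboard
// loads into a single set of Clickhouse queries.
class TtlCache<T> {
  private store = new Map<string, { value: T; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string, now: number, compute: () => T): T {
    const entry = this.store.get(key);
    if (entry && entry.expires > now) return entry.value; // served from cache
    const value = compute();
    this.store.set(key, { value, expires: now + this.ttlMs });
    return value;
  }
}

// 100 visitors spread over one minute, 6 queries per dashboard load:
let clickhouseQueries = 0;
const cache = new TtlCache<string>(2 * 60 * 1000); // 2-minute TTL
for (let visitor = 0; visitor < 100; visitor++) {
  for (let q = 0; q < 6; q++) {
    cache.get(`query-${q}`, visitor * 600 /* visitors arrive 600 ms apart */, () => {
      clickhouseQueries++;
      return "result";
    });
  }
}
// Without the cache: 100 * 6 = 600 queries; with it, only the first load misses.
console.log(clickhouseQueries); // 6
```

The same 100x reduction applies whether the cache sits in the app, a CDN, or the browser - the debate is only about which layer can invalidate it correctly.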
@apata raises good points. Caching certainly has potential to reduce pressure on Clickhouse.
On the other hand, POST makes sense here because we need to send a JSON payload. Making it a GET requires some sort of URL serialization (bringing in a jsonurl dependency on the Elixir side, using base64, or something like that), which can get a bit messy in terms of debuggability/logging.
Given that this is an internal endpoint we can freely change in the future, I'm in favour of going with the simplest and most transparent option now (POST with a JSON body). If we want HTTP/CDN-level caching later, we can change it to a GET with URL serialization. This is based on my assumption that changing it later is not a big job - limited to the API client code in React and a single controller action on the backend.
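For the record, the GET alternative being weighed is roughly this (a sketch only - the `?q=` parameter name and the base64url choice are assumptions, not a decided design):

```typescript
// Sketch: serialize the JSON query body into a single base64url query
// parameter so the request becomes a cacheable GET.
function encodeQuery(query: object): string {
  return Buffer.from(JSON.stringify(query)).toString("base64url");
}

function decodeQuery(param: string): object {
  return JSON.parse(Buffer.from(param, "base64url").toString("utf8"));
}

const query = { metrics: ["visitors"], date_range: "7d", filters: [] };
const url = `/example.com/query?q=${encodeQuery(query)}`;
// The URL is now a stable cache key for a CDN, but the opaque blob is
// exactly the debuggability/logging cost mentioned above.
```

This illustrates why "change it later" is plausibly a small job: the transformation is a pure encode/decode pair at the client/controller boundary.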
Changes

Please describe the changes made in the pull request here.
Below you'll find a checklist. For each item on the list, check one option and delete the other.

- Tests
- Changelog
- Documentation
- Dark mode