Caching - collecting use cases #557
Replies: 7 comments
-
With Relay, for me the big benefit is the data normalisation, not so much the caching. E.g. you change the user's name here, and it changes everywhere, because Relay understands that this is the same user object fetched from GraphQL. This data normalisation is a harder problem in Elm, as the records have different shapes; JS doesn't care about this.
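To make the "different shapes" point concrete, here is a tiny, hypothetical Elm illustration: two queries selecting different fields of the same user decode into two unrelated record types, so there is no single place to write an updated name.

```elm
module UserShapes exposing (UserDetail, UserSummary)

{-| Two hypothetical record shapes for the "same" user, produced by two
different queries. Relay can merge both results into a single normalised
`User:<id>` entry because JS objects don't care about shape; in Elm these
are simply two distinct types.
-}


type alias UserSummary =
    { id : String
    , name : String
    }


type alias UserDetail =
    { id : String
    , name : String
    , email : String
    }
```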
-
Hey @sporto, thank you for sharing your insights here! I like the way you're framing it. Essentially, there are two concerns:
1. Performance (avoiding redundant requests for data we already have).
2. Data Consistency (keeping the same entity in sync everywhere it is shown).
It would indeed require a lot of creativity to build a caching solution in Elm! One way I can imagine would be to have opaque types for data which store the raw JSON alongside the JSON decoder. You could keep all your queries in a store and have them be updated under the hood (a rough sketch of that idea is below). It certainly starts to sound very heavyweight!

I had a nice conversation today with Thibaut Assus about an approach he's been using for the Data Consistency problem: a special type of Subscriptions. It occurred to me that getting data via Subscriptions can address this problem if they are set up to trigger a new payload whenever the relevant data changes. Some frameworks like Prisma have this built in. I've been wondering whether Subscriptions might just be the best solution to this Data Consistency problem. I'd be curious to hear other people's thoughts.
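For what it's worth, here is a minimal sketch of what such an opaque type could look like, assuming nothing beyond elm/json. All of the names are made up and none of this exists in elm-graphql today.

```elm
module CachedQuery exposing (CachedQuery, fromResponse, toResult, updateRaw)

{-| A minimal, hypothetical sketch of the "opaque type that stores the raw
JSON alongside its decoder" idea.
-}

import Json.Decode as Decode exposing (Decoder, Value)


type CachedQuery a
    = CachedQuery
        { raw : Value
        , decoder : Decoder a
        }


{-| Store the raw response together with the decoder that knows how to read it. -}
fromResponse : Decoder a -> Value -> CachedQuery a
fromResponse decoder raw =
    CachedQuery { raw = raw, decoder = decoder }


{-| The store could patch the raw JSON "under the hood" (e.g. after a
normalised entity changes) without knowing the concrete Elm type.
-}
updateRaw : (Value -> Value) -> CachedQuery a -> CachedQuery a
updateRaw patch (CachedQuery cache) =
    CachedQuery { cache | raw = patch cache.raw }


{-| Re-decode on demand, so consumers always see the latest patched data. -}
toResult : CachedQuery a -> Result Decode.Error a
toResult (CachedQuery { raw, decoder }) =
    Decode.decodeValue decoder raw
```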
-
Hey @dillonkearns! On number 2: personally, I find it really difficult to keep track of all the queries that need to be invalidated for each mutation as the app grows, but I don't see any other practical solution, since it is impossible to guess the queries to refetch by parsing the mutation. A simple example is a Login mutation that returns a token. In my app, the user has projects, so the cache for the projects needs to be invalidated when the user logs in (it could be during logout as well), but the Login mutation does not have any reference to projects. Maybe a hybrid approach, where the user can specify the queries to refetch, and some processing is done to map the ids from the mutation to the ids of the cached queries. But this is becoming really heavy...
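One rough sketch of that hybrid idea (all names hypothetical): each mutation explicitly declares the cached query keys it invalidates, and the store refetches those queries after the mutation succeeds. The Login case shows why this can't be derived from the mutation document itself.

```elm
module Invalidation exposing (CacheKey(..), Mutation(..), invalidates)

{-| Hypothetical sketch: mutations declare which cached queries they
invalidate, since the queries to refetch cannot be guessed by parsing
the mutation.
-}


type CacheKey
    = ProjectsQuery
    | CurrentUserQuery


type Mutation
    = Login { email : String, password : String }
    | RenameProject { projectId : String, newName : String }


invalidates : Mutation -> List CacheKey
invalidates mutation =
    case mutation of
        Login _ ->
            -- Logging in changes which user's projects should be shown,
            -- even though the Login mutation never mentions projects.
            [ ProjectsQuery, CurrentUserQuery ]

        RenameProject _ ->
            [ ProjectsQuery ]
```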
-
@tmattio thank you for sharing the concrete use case and example from experience here, that's super helpful! This is a big idea and I think it's worth "blue skying" a bit and sketching out some possibilities for how this could look in the context of …
-
I recently watched this talk, which put caching into perspective. I'm no GraphQL expert, but it seems to me that caching is a high-complexity, low-return optimization that would only need to be implemented in extremely rare, high-load circumstances.
-
Another interesting use case that I just learned about is offline-first web applications: see this apollo-client discussion and a linked cache solution currently maintained by the Apollo team.
-
For the apps I've worked on, the biggest advantage is definitely data consistency. I haven't found a situation in which performance issues couldn't be fixed by something simpler than normalized caching.
-
I don't have any concrete plans to add caching, and I'm not exactly sure what it would look like in Elm! But I'm curious to understand this topic better.
I would really appreciate insights on any of the following questions:
Please leave a comment if you have thoughts on this. Thank you!