Hi @jejass. Thanks for reaching out. You are not seeing updated data in Client B because of kernel caching. To disable the kernel cache, you will have to use …
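The reply is cut off in this capture. For reference, a minimal sketch of what a kernel-cache-bypassing `libfuse` section might look like, assuming blobfuse2 accepts a `direct-io` option under the `libfuse` component (verify the exact option name against the official blobfuse2 sample config before relying on this):

```yaml
# Hedged sketch: option name assumed, not confirmed by the truncated reply.
libfuse:
  direct-io: true              # bypass the kernel page cache for this mount
  attribute-expiration-sec: 0  # do not let the kernel cache attributes
  entry-expiration-sec: 0      # do not let the kernel cache directory entries
```

Note the original poster reports below that `direct_io` causes an application error in their setup, so this may need to be combined with application-side changes.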
-
Hello team,
Here is the scenario: both Client A and Client B mount the same storage account via blobfuse2. Client A is responsible for writes, Client B for reads.
I tested 2 configurations; with both of them, Client B cannot read the latest updated blob, even after waiting for 1 hour.
configuration 1:

```yaml
logging:
  type: syslog
  level: log_debug
components:
  libfuse:
    attribute-expiration-sec: 0
    entry-expiration-sec: 0
    negative-entry-expiration-sec: 0
  file_cache:
    path: /mnt/blobfusetmp
    timeout-sec: 0
    max-size-mb: 0
    refresh-sec: 1
  attr_cache:
    timeout-sec: 0
```
configuration 2:

```yaml
logging:
  type: syslog
  level: log_debug
components:
  libfuse:
    attribute-expiration-sec: 0
    entry-expiration-sec: 0
    negative-entry-expiration-sec: 0
  block_cache:
    block-size-mb: 1
    mem-size-mb: 1000
    path: /mnt/blobfusetmp
    disk-size-mb: 1
    disk-timeout-sec: 1
  attr_cache:
    timeout-sec: 0
```
I understand the 'direct_io' parameter can disable the kernel page cache, but with this parameter the application reports an error.
So I want to find a way, via the blobfuse2 configuration, for Client B to read the latest update.
Is that possible?
As per my testing, it seems Client B reads the blob only once. After that, it does not refresh the content, even with timeout 0 and cache size 0. I also found the 'refresh-sec' parameter, but it does not work either.
Is that how blobfuse2 works? Will the read client never refresh the content under any configuration?
Thank you in advance.
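One detail worth checking on the reader's side: with FUSE filesystems, content is typically revalidated when a file is *opened*, not on reads from an already-open handle, so a reader process that holds one file descriptor open may never see updates regardless of cache timeouts. A minimal sketch of a reopen-per-poll reader (run here against a local temp file as a stand-in for the blobfuse2 mount path; the path is illustrative):

```python
import os
import tempfile

def read_latest(path):
    """Reopen the file on every poll so the kernel/FUSE layer gets a
    chance to revalidate it; a long-lived handle may serve stale data."""
    with open(path, "rb") as f:
        return f.read()

# Demo against a local temp file (stand-in for a path under the mount).
tmpdir = tempfile.mkdtemp()
blob = os.path.join(tmpdir, "data.txt")

with open(blob, "wb") as f:        # "Client A" writes v1
    f.write(b"v1")
print(read_latest(blob).decode())  # v1

with open(blob, "wb") as f:        # "Client A" overwrites with v2
    f.write(b"v2")
print(read_latest(blob).decode())  # v2: seen because we reopened
```

If Client B's application reads through a handle it opened once at startup, this pattern (or restarting/reopening between reads) is worth testing before tuning cache timeouts further.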