
java.lang.UnsupportedOperationException: io.lettuce.core.output.ValueOutput does not support set(long) #3121

Closed
wangchengming666 opened this issue Jan 10, 2025 · 5 comments

wangchengming666 commented Jan 10, 2025

Error message as below:

Caused by: java.lang.UnsupportedOperationException: io.lettuce.core.output.StatusOutput does not support set(long)
	at io.lettuce.core.output.CommandOutput.set(CommandOutput.java:107)
	at io.lettuce.core.protocol.RedisStateMachine.safeSet(RedisStateMachine.java:774)
	at io.lettuce.core.protocol.RedisStateMachine.handleInteger(RedisStateMachine.java:409)
	at io.lettuce.core.protocol.RedisStateMachine$State$Type.handle(RedisStateMachine.java:205)
	at io.lettuce.core.protocol.RedisStateMachine.doDecode(RedisStateMachine.java:339)
	at io.lettuce.core.protocol.RedisStateMachine.decode(RedisStateMachine.java:300)
	at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:840)
	at io.lettuce.core.protocol.CommandHandler.decode0(CommandHandler.java:791)
	at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:765)
	at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:657)
	at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:597)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 common frames omitted

When searching for this problem in the issue list, I found that similar issues have been reported in the past.
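
For context, a minimal sketch (the endpoint and key below are hypothetical) of the kind of call on which this exception can surface once reply decoding is out of sync, as discussed later in this thread:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class ValueOutputMismatchSketch {

    public static void main(String[] args) {
        // Hypothetical endpoint; adjust to your environment.
        RedisClient client = RedisClient.create("redis://localhost:6379");

        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> commands = connection.sync();

            // GET decodes its reply through a ValueOutput, which only accepts
            // bulk-string replies. If the connection is out of sync and an
            // integer reply (e.g. the result of an earlier INCR or DEL) gets
            // mapped to this command, the decoder calls set(long) on that
            // output and the base CommandOutput implementation throws the
            // UnsupportedOperationException shown in the stack trace above.
            String value = commands.get("some-key");
            System.out.println(value);
        } finally {
            client.shutdown();
        }
    }
}
```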

tishun (Collaborator) commented Jan 11, 2025

Thanks @wangchengming666 for the report!

Having just one exception makes it hard to diagnose the underlying problem. There could be many reasons why this happens. Do you have a way to reproduce it? What commands were sent, and were there other issues before that (OOM, connectivity, etc.)? Does it repeat, or did it happen only once? Does the driver stop working until it is restarted?

tishun added the status: waiting-for-feedback label Jan 11, 2025

hayk96 commented Jan 15, 2025

Hello,

We encountered this issue as well. Unfortunately, it wasn't reproducible, but we observed that only a single pod was affected, and all errors originated from that pod. When the container reached its maximum memory limit, Kubernetes terminated it due to OOM. Afterward, the application started properly, resolving the issue. We noted that the affected pod had slightly higher memory usage than other replicas from its start time (though the importance of this is unclear).

wangchengming666 (Author)

> Thanks @wangchengming666 for the report!
>
> Having just one exception makes it hard to diagnose the underlying problem. There could be many reasons why this happens. Do you have a way to reproduce it? What commands were sent, and were there other issues before that (OOM, connectivity, etc.)? Does it repeat, or did it happen only once? Does the driver stop working until it is restarted?

Unfortunately, I cannot reproduce this issue: it appeared suddenly at runtime and persisted for a while. After restarting the application, it was resolved.

tishun (Collaborator) commented Jan 17, 2025

> Hello,
>
> We encountered this issue as well. Unfortunately, it wasn't reproducible, but we observed that only a single pod was affected, and all errors originated from that pod. When the container reached its maximum memory limit, Kubernetes terminated it due to OOM. Afterward, the application started properly, resolving the issue. We noted that the affected pod had slightly higher memory usage than other replicas from its start time (though the importance of this is unclear).

This is quite likely the issue described in #3132

tishun (Collaborator) commented Jan 17, 2025

> Unfortunately, I cannot reproduce this issue: it appeared suddenly at runtime and persisted for a while. After restarting the application, it was resolved.

Out-of-sync issues can appear when a major error occurs (e.g. an OOM or some other unhandled error while decoding the server response). In that case the event loop skips processing the response and attempts to map the next response to the same command, leading to the out-of-sync state. It can also manifest as wrong response contents.
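
As a rough, simplified illustration only (a hypothetical model, not Lettuce's actual decoder), the pairing problem can be pictured as a FIFO queue of pending commands whose replies shift by one after a lost response:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Simplified, hypothetical model of the out-of-sync scenario: pending commands
// are kept in FIFO order and each decoded reply is paired with the command at
// the head of the queue.
public class OutOfSyncSketch {

    record Pending(String command, String expectedReplyType) { }

    public static void main(String[] args) {
        Deque<Pending> pending = new ArrayDeque<>(List.of(
                new Pending("GET key1", "bulk-string"),
                new Pending("DEL key2", "integer"),
                new Pending("GET key3", "bulk-string")));

        // The reply to "GET key1" is lost to a decoding failure (simulated here
        // by simply omitting it), so the remaining replies shift by one command.
        List<String> arrivedReplyTypes = List.of("integer", "bulk-string");

        for (String replyType : arrivedReplyTypes) {
            Pending head = pending.poll();
            if (!head.expectedReplyType().equals(replyType)) {
                // This mismatch is what surfaces as
                // "<SomeOutput> does not support set(long)": an integer reply
                // reaches an output that only handles bulk strings.
                System.out.printf("out of sync: %s got a %s reply%n",
                        head.command(), replyType);
            }
        }
    }
}
```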

Overall, the fix in #3132 should reduce the occurrence of such issues, but ideally please record any stack traces that precede the out-of-sync error so we can better identify the root cause.

I will close this ticket, as it is not actionable without more data. If you analyse the logs and find the issue that led to the out-of-sync state, please reopen this ticket and include it.

tishun closed this as completed Jan 17, 2025
tishun removed the status: waiting-for-feedback label Jan 17, 2025