
High CPU usage · #120

Closed · Anto79-ops opened this issue Jul 27, 2022 · 17 comments

Anto79-ops commented Jul 27, 2022

Hi,

I'm trying to get some help figuring out a problem I'm having, and I've landed here. I'm running a script (called ecowitt2mqtt) on an RPi 4 running Bullseye that publishes data from my local weather station to my MQTT broker, which HA then discovers. The script host (RPi 4), the broker (Ubuntu 22.04), and HA are three separate instances on the same network. After hours or days, the core on my RPi 4 that is running the script chokes and runs at 100% (~30% of total CPU).

I ran py-spy on the instance and caught 700 errors, but nothing too conclusive. Running:

$ strace -p <pid> -f -s 4096

on the stuck process, yields this:

[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 76, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683a1a0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 76, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 76, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683a1a0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 76, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 75, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683a1a0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 75, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 75, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683a1a0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 74, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 74, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683a1a0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 74, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 74, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683a1a0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 73, NULL, 8) = 1
[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 121259] epoll_pwait(3, [{EPOLLIN, {u32=13, u64=13}}], 1024, 73, NULL, 8) = 1
[pid 121259] recvfrom(14, ^Cstrace: Process 121259 detached

and then

$ lsof -p <pid> -n

yields these descriptors:

COMMAND      PID USER   FD      TYPE             DEVICE SIZE/OFF   NODE NAME
ecowitt2m 121259 root  cwd       DIR              179,2     4096      2 /
ecowitt2m 121259 root  rtd       DIR              179,2     4096      2 /
ecowitt2m 121259 root  txt       REG              179,2  5280744   1882 /usr/bin/python3.9
ecowitt2m 121259 root  mem       REG              179,2    15688  12246 /usr/lib/python3.9/lib-dynload/_multiprocessing.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2   192112   5267 /usr/lib/aarch64-linux-gnu/libmpdec.so.2.5.1
ecowitt2m 121259 root  mem       REG              179,2   163840  12240 /usr/lib/python3.9/lib-dynload/_decimal.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    63376  12241 /usr/lib/python3.9/lib-dynload/_hashlib.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    44568  12242 /usr/lib/python3.9/lib-dynload/_json.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2   350640 650642 /usr/local/lib/python3.9/dist-packages/Levenshtein/_levenshtein.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2  2127000   7468 /usr/local/lib/python3.9/dist-packages/_ruamel_yaml.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    15304  12249 /usr/lib/python3.9/lib-dynload/_queue.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    31592   2153 /usr/lib/aarch64-linux-gnu/librt-2.31.so
ecowitt2m 121259 root  mem       REG              179,2 11791880 650852 /usr/local/lib/python3.9/dist-packages/uvloop/loop.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    30712   5387 /usr/lib/aarch64-linux-gnu/libuuid.so.1.3.0
ecowitt2m 121259 root  mem       REG              179,2     6240  12257 /usr/lib/python3.9/lib-dynload/_uuid.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2   154232   2100 /usr/lib/aarch64-linux-gnu/liblzma.so.5.2.5
ecowitt2m 121259 root  mem       REG              179,2    33144  12244 /usr/lib/python3.9/lib-dynload/_lzma.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    70504   5115 /usr/lib/aarch64-linux-gnu/libbz2.so.1.0.4
ecowitt2m 121259 root  mem       REG              179,2    20032  12226 /usr/lib/python3.9/lib-dynload/_bz2.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    62336  12225 /usr/lib/python3.9/lib-dynload/_asyncio.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2  2739952   8506 /usr/lib/aarch64-linux-gnu/libcrypto.so.1.1
ecowitt2m 121259 root  mem       REG              179,2   577176   8510 /usr/lib/aarch64-linux-gnu/libssl.so.1.1
ecowitt2m 121259 root  mem       REG              179,2   181184  12251 /usr/lib/python3.9/lib-dynload/_ssl.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    51640   2147 /usr/lib/aarch64-linux-gnu/libnss_files-2.31.so
ecowitt2m 121259 root  mem       REG              179,2     6080  12233 /usr/lib/python3.9/lib-dynload/_contextvars.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2    10320  12247 /usr/lib/python3.9/lib-dynload/_opcode.cpython-39-aarch64-linux-gnu.so
ecowitt2m 121259 root  mem       REG              179,2  3041504   2580 /usr/lib/locale/locale-archive
ecowitt2m 121259 root  mem       REG              179,2  1458480   2140 /usr/lib/aarch64-linux-gnu/libc-2.31.so
ecowitt2m 121259 root  mem       REG              179,2   104824   5407 /usr/lib/aarch64-linux-gnu/libz.so.1.2.11
ecowitt2m 121259 root  mem       REG              179,2   161856   5161 /usr/lib/aarch64-linux-gnu/libexpat.so.1.6.12
ecowitt2m 121259 root  mem       REG              179,2   633000   2142 /usr/lib/aarch64-linux-gnu/libm-2.31.so
ecowitt2m 121259 root  mem       REG              179,2    14672   2155 /usr/lib/aarch64-linux-gnu/libutil-2.31.so
ecowitt2m 121259 root  mem       REG              179,2    14560   2141 /usr/lib/aarch64-linux-gnu/libdl-2.31.so
ecowitt2m 121259 root  mem       REG              179,2   160200   2151 /usr/lib/aarch64-linux-gnu/libpthread-2.31.so
ecowitt2m 121259 root  mem       REG              179,2   145352   2136 /usr/lib/aarch64-linux-gnu/ld-2.31.so
ecowitt2m 121259 root  mem       REG              179,2    27004   2448 /usr/lib/aarch64-linux-gnu/gconv/gconv-modules.cache
ecowitt2m 121259 root    0r      CHR                1,3      0t0      5 /dev/null
ecowitt2m 121259 root    1u     unix 0x00000000c8dfa60c      0t0 625905 type=STREAM
ecowitt2m 121259 root    2u     unix 0x00000000c8dfa60c      0t0 625905 type=STREAM
ecowitt2m 121259 root    3u  a_inode               0,13        0   7590 [eventpoll]
ecowitt2m 121259 root    4r     FIFO               0,12      0t0 625101 pipe
ecowitt2m 121259 root    5w     FIFO               0,12      0t0 625101 pipe
ecowitt2m 121259 root    6r     FIFO               0,12      0t0 625102 pipe
ecowitt2m 121259 root    7w     FIFO               0,12      0t0 625102 pipe
ecowitt2m 121259 root    8u  a_inode               0,13        0   7590 [eventfd]
ecowitt2m 121259 root    9u     unix 0x000000003d8f318e      0t0 625103 type=STREAM
ecowitt2m 121259 root   10u     unix 0x000000007373546a      0t0 625104 type=STREAM
ecowitt2m 121259 root   11u     IPv4             625114      0t0    TCP *:http-alt (LISTEN)
ecowitt2m 121259 root   12r      CHR                1,3      0t0      5 /dev/null
ecowitt2m 121259 root   13u     IPv4             630945      0t0    TCP 192.168.1.130:http-alt->192.168.1.138:55339 (CLOSE_WAIT)
ecowitt2m 121259 root   14u     IPv4             633219      0t0    TCP 192.168.1.130:55201->192.168.1.139:1883 (ESTABLISHED)

FYI:
192.168.1.130:55201 is the RPi running the script
192.168.1.139:1883 is my MQTT broker

Someone much more knowledgeable than me has suggested that the issue could be here:

https://github.com/sbtinstruments/asyncio-mqtt/blob/6b02071227635fa532698b55c5159755f4e411b2/asyncio_mqtt/client.py#L524

I am running the latest version of asyncio-mqtt on the RPi.

Does anybody know why this resource becomes unavailable and chokes my RPi?

thanks!

frederikaalund (Collaborator) commented Jul 27, 2022

Hi Anto79-ops, thanks for opening this issue. Let me have a look. :)

First of all, thanks for the detailed report. That always helps. 👍 However, in order to really dig into this issue, I need a stack trace from within the Python part of your executable.

Something like py-spy will do (also available through piwheels). Like your existing suite of tools, py-spy attaches to a running process. Specifically, I'd like to see py-spy top --pid <pid> and a couple of py-spy dump --pid <pid> outputs. Just give me the full dumps of a busy-looping process and I'll dig through the data.

This way, we can connect the busy-looping syscalls seen in strace to the actual lines of Python code that invoke them.

Edit: I reread your issue and saw that you already tried with py-spy. I'd like to see the stack trace dumps of that. 👍

~Frederik

Anto79-ops (Author) commented

Hi, and thanks, @frederikaalund.

Somehow I forgot to include the py-spy file here, oops. This run caught the high (stuck) CPU that I describe above; py-spy said that it had collected over 4 million data points and 700 errors when I stopped it.

profile (1)

frederikaalund (Collaborator) commented

Thanks! Maybe it's just me, but the SVG seems to have lost its interactivity.

In any case, the output of py-spy dump is easier for me to parse.

frederikaalund (Collaborator) commented

Alternatively, try to upload the SVG to a file sharing service. I think the SVG loses its interactivity when uploaded directly as an image on GitHub. 👍

bachya (Contributor) commented Jul 27, 2022

Hi @frederikaalund,

While @Anto79-ops gathers data, I'm chiming in to say that I'm the owner of ecowitt2mqtt, the library he references. I'm subscribed to this issue and can provide input about how I'm using asyncio-mqtt at any point. Thanks for your help!

Anto79-ops (Author) commented Jul 27, 2022

Thanks @frederikaalund and @bachya! Try getting it from my Google Drive, here:

https://drive.google.com/file/d/1xxhPbsG7cMJY3ORv5DwnWSWy4c-PjcMv/view?usp=sharing

Does the above work for you?

As for py-spy dump, I can definitely try to record more detailed data for you. Can I simply record the data by typing

py-spy dump --pid <pid>

once the stuck CPU happens, or do I have to start recording before the issue happens? I ask because the PID of the process is unknown until I start the script and then wait (hours to days) for the stuck CPU to appear.

frederikaalund (Collaborator) commented

Thank you. 👍 The SVG from your Google Drive link worked.

As for py-spy dump, yes, it's fine to just attach it to a process after the fact. It basically just dumps the stack trace at that instant. Therefore, take a couple of stack trace dumps (just to make sure that we hit something interesting).

I'll have a look at the SVG later.

@bachya Thanks! I'll let you know once I've had a closer look at the stack traces / SVGs. Hope you find asyncio-mqtt useful, btw. 👍

bachya (Contributor) commented Jul 27, 2022

@bachya Thanks! I'll let you know once I've had a closer look at the stack traces / SVGs. Hope you find asyncio-mqtt useful, btw. 👍

Without any doubt – when I started my project, I thought I'd have to wade into the depths of paho-mqtt... It was a joy to find your library!

frederikaalund (Collaborator) commented Jul 27, 2022

@bachya Maybe you'll question that joy in a bit. 😅

strace analysis

[pid 121259] recvfrom(14, 0x7f8683aaa0, 1, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)

From the strace, I gather that the MQTT client loses its connection to the broker. Once this happens, the asyncio_mqtt.Client instance is done for. You need to create a new one and reconnect. I'm afraid that ecowitt2mqtt tries to reuse the existing client instance instead of creating a new one. That leads to undefined behaviour.

Excerpt from topic.py:

async def async_publish(self, data: dict[str, DataValueType]) -> None:
    """Publish to MQTT."""
    ...
    try:
        async with self.client:  # <-- Same client for each call to this function.
            await self.client.publish(
                self.ecowitt.config.mqtt_topic, generate_mqtt_payload(data)
            )
    except MqttError as err:
        ...
    ...

The asyncio_mqtt.Client is single-use. Once disconnected, you should create a new instance. It may work now when no errors occur, but this too may break in a future release.
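
To make that concrete, here is a minimal sketch of the single-use pattern (publish_once is just an illustration, not part of either library):

import asyncio_mqtt

async def publish_once(host: str, topic: str, payload: bytes) -> None:
    # A fresh Client instance for every connection attempt; never reuse
    # one after it has disconnected.
    client = asyncio_mqtt.Client(host)
    async with client:  # connects on enter, disconnects on exit
        await client.publish(topic, payload)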

I see that there are a lot of LOGGER.error/debug calls whenever an error occurs. That's really nice! @Anto79-ops, do you have the logs from a CPU-bound run? Preferably with the DEBUG log level. I expect that you'll see repeated "Unable to publish payload: ..." messages. Try to verify or reject this.

Stack trace analysis from the py-spy SVG

The CPU-bound process spends its time within this stack trace:

  1. asyncio_mqtt.Client._on_socket_open at the client.loop_read() call
  2. paho's loop_read function line 1556.
  3. The innermost call goes to self._sock.recv(...).

This raises two questions:

  1. Why do we call _on_socket_open so frequently?
  2. Why does this stall the CPU?

It's easy to answer question 1: paho-mqtt calls the on_socket_open callback whenever the client connects.
So why does the client connect so often? That's because ecowitt2mqtt connects/disconnects the client for each call to publish. Now whether there is a good reason for this or not, I don't know. It does seem like an awful waste of resources to connect/disconnect that frequently. @bachya Why not just connect once during initialization? Maybe I'm missing something.
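
For reference, connect-once would look roughly like this (a sketch only; the Publisher class is hypothetical, and it keeps one client's context open via a contextlib.AsyncExitStack, the same tool the advanced example uses):

import contextlib
import asyncio_mqtt

class Publisher:
    """Owns one long-lived MQTT client instead of reconnecting per publish."""

    def __init__(self, host: str) -> None:
        self._client = asyncio_mqtt.Client(host)
        self._stack = contextlib.AsyncExitStack()

    async def start(self) -> None:
        # Enter the client's context once, during application startup.
        await self._stack.enter_async_context(self._client)

    async def publish(self, topic: str, payload: bytes) -> None:
        await self._client.publish(topic, payload)

    async def stop(self) -> None:
        # Leave the client's context once, during application shutdown.
        await self._stack.aclose()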

It's a bit more difficult to answer question 2. My best guess is that if the client disconnected gracefully, then the subsequent reconnect is light on CPU resources. If, however, the client disconnected with an error (as the strace suggests), then the subsequent connect may be more expensive. The logs will tell the truth.

A general observation

I had a look at Server._async_post_data from ecowitt2mqtt:

for callback in self._device_payload_callbacks:
    if asyncio.iscoroutinefunction(callback):
        self._loop.create_task(callback(payload))  # type: ignore
    else:
        callback(payload)

When you use loop.create_task like this, you get "fire and forget" semantics. Nice and easy. Maybe a bit too easy. What happens if the callback raises an exception? Spoiler: You won't know until you implicitly await the task when the event loop closes. At that point, it's too late to react to, e.g., a disconnect error.

To avoid this, we always keep track of the task object returned from create_task and await it. It's cumbersome but necessary. That's why there are libraries like anyio to help us with it (if you can afford another dependency). In fact, anyio.TaskGroup is so useful that something very similar is already on its way to asyncio in a future version of Python.
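
As a sketch, the same dispatch written with an anyio task group (illustrative only; note that an exception in any callback now propagates out of the async with block):

import anyio

async def dispatch(callbacks, payload):
    # The task group implicitly awaits every task on exit, so a failing
    # callback raises here instead of at event-loop shutdown.
    async with anyio.create_task_group() as tg:
        for callback in callbacks:
            tg.start_soon(callback, payload)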


Let me know if you guys have any questions about the above. 👍

EDIT: Grammar and typos.

bachya (Contributor) commented Jul 27, 2022

The asyncio_mqtt.Client is single-use. Once disconnected, you should create a new instance. It may work now when no errors occur, but this too may break in a future release.

Ah, okay – I didn't know that. I would expect that when the context manager ends, it closes everything nicely so the same object can be used again... Does something happen during Client __init__ that can't be re-done?

So why does the client connect so often? That's because ecowitt2mqtt connects/disconnects the client for each call to publish. Now whether there is a good reason for this or not, I don't know. It does seem like an awful waste of resources to connect/disconnect that frequently. @bachya Why not just connect once during initialization? Maybe I'm missing something.

I did this for a couple of reasons:

  1. I didn't know whether asyncio-mqtt or paho-mqtt already had reconnection logic.
  2. Rather than create my own reconnection logic, I felt that a disconnect/reconnect per call wasn't overly expensive (especially since gateways might take a long time between publishes, and I didn't see a reason to keep the connection open).

(2) is ultimately irrelevant, but what about (1)? Do I need to implement my own reconnection logic?

When you use loop.create_task like this, you get "fire and forget" semantics. Nice and easy. Maybe a bit too easy. What happens if the callback raises an exception? Spoiler: You won't know until you implicitly await the task when the event loop closes. At that point, it's too late to react to, e.g., a disconnect error.

To avoid this, we always keep track of the task object returned from create_task and await it. It's cumbersome but necessary. That's why there are libraries like anyio to help us with it (if you can afford another dependency). In fact, anyio.TaskGroup is so useful that something very similar is already on its way to asyncio in a future version of Python.

Ah, great point – I can absolutely afford to include anyio. Will dig into that.

bachya (Contributor) commented Jul 27, 2022

@frederikaalund One additional question re: ^^^. I noticed your advanced example uses an AsyncExitStack – if task grouping for cancellation is the primary concern here, is anyio really needed, or can I stick with the built-in primitives?

frederikaalund (Collaborator) commented Jul 27, 2022

Ah, okay – I didn't know that. I would expect that when the context manager ends, it closes everything nicely so the same object can be used again... Does something happen during Client __init__ that can't be re-done?

In general, context managers are single-use unless otherwise specified. As for asyncio_mqtt.Client, I simply don't know if the current implementation is already reusable (in contrast to single-use). I doubt it (this issue itself indicates that the client is not reusable), but we need to test it to really find out. See #48 (and my comment). I'll be glad to review a PR on the matter. 👍

(2) is ultimately irrelevant, but what about (1)? Do I need to implement my own reconnection logic?

Yes, for now you would have to. My suggestion is to use a retry loop similar to that found in the advanced example in the readme file. For this to work, you must ensure that exceptions (e.g., due to network errors) propagate up to the retry loop. E.g., via an anyio.TaskGroup.
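
Sketched out, such a retry loop could look like this (patterned on the readme's advanced example; handle_connection is a hypothetical worker that raises MqttError on network failure):

import asyncio
import asyncio_mqtt

async def run_forever() -> None:
    reconnect_interval = 3  # seconds; a placeholder value
    while True:
        try:
            # A fresh client instance per connection attempt.
            async with asyncio_mqtt.Client("broker.example.com") as client:
                await handle_connection(client)
        except asyncio_mqtt.MqttError as error:
            print(f'Error "{error}". Reconnecting in {reconnect_interval} seconds.')
        finally:
            await asyncio.sleep(reconnect_interval)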

One additional question re: ^^^. I noticed your advanced example uses an AsyncExitStack – if task grouping for cancellation is the primary concern here, is anyio really needed, or can I stick with the built-in primitives?

You can stick with the built-in primitives. 👍 That's what I do in asyncio-mqtt. That being said, if I had known about anyio (or structured concurrency in general) back when I created this library, then I would have used anyio. No doubt about that. anyio's task groups make everything much easier to reason about.

I stick to raw asyncio for now to maintain backwards compatibility (and avoid too many dependencies). There was a discussion about this in the past: #44.
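
For completeness, a rough built-in-primitives version of the dispatch sketch from earlier (a generic pattern, not asyncio-mqtt internals):

import asyncio

async def dispatch(callbacks, payload):
    # Track every task and await them all, so exceptions surface here
    # rather than at event-loop shutdown. Unlike a task group, gather()
    # does not cancel the sibling tasks when one of them fails.
    tasks = [asyncio.create_task(cb(payload)) for cb in callbacks]
    await asyncio.gather(*tasks)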


Let me know if you two have any other questions. Also, I'd still like to see the log files from a CPU-bound run. Until then, the above is just speculation.

EDIT: Fixed link and typo.

bachya (Contributor) commented Jul 27, 2022

Ah, okay – I didn't know that. I would expect that when the context manager ends, it closes everything nicely so the same object can be used again... Does something happen during Client __init__ that can't be re-done?

In general, context managers are single-use unless otherwise specified. As for asyncio_mqtt.Client, I simply don't know if the current implementation is already reusable (in contrast to single-use). I doubt it (this issue itself indicates that the client is not reusable), but we need to test it to really find out. See #48 (and my comment). I'll be glad to review a PR on the matter. 👍

For what it's worth, it does "work" in that most users successfully publish multiple messages with the same client (re-entered). Whether that's correct practice (or causing issues under the surface) is obviously a different matter.

(2) is ultimately irrelevant, but what about (1)? Do I need to implement my own reconnection logic?

Yes, for now you would have to. My suggestion is to use a retry loop similar to that found in the advanced example in the readme file. For this to work, you must ensure that exceptions (e.g., due to network errors) propagate up to the retry loop. E.g., via an anyio.TaskGroup.

Yep, got it.

One additional question re: ^^^. I noticed your advanced example uses an AsyncExitStack – task grouping for cancellation is the primary concern here, is anyio really needed, or can I stick with the built-in primitives?

You can stick with the built-in primitives. 👍 That's what I do in asyncio-mqtt. That being said, if I had known about anyio (or structured concurrency in general) back when I created this library, then I would have used anyio. No doubt about that. anyio's task groups make everything much easier to reason about.

I stick to raw asyncio for now to maintain backwards compatibility (and avoid too many dependencies). There was a discussion about this in the past: #44.

Got it. I'm looking for less work, so I'll check out anyio. 😂 Thanks for the recommendation!

Anto79-ops (Author) commented Jul 27, 2022

@frederikaalund I'd be happy to provide the logs; as much as I enjoy troubleshooting, I'm not good at it.

What do I have to do to generate or get those logs for you?

Thanks

bachya (Contributor) commented Jul 28, 2022

FYI, digging in: keeping a single connection open won't work with ecowitt2mqtt's architecture – since we're a Uvicorn + FastAPI application, there's no feasible way to have a reconnect "loop" (similar to the docs), because we publish when we receive REST API calls via FastAPI. Happy to go into more detail if interested, but more importantly, we'll need to connect/disconnect with each payload. If that's always going to spike CPU, I'm not certain we can do anything...

EDIT: I lied. 😂 bachya/ecowitt2mqtt#236

Anto79-ops (Author) commented

With the new code developed by @bachya, the problem is solved! Thanks, all, for your input on this matter.

frederikaalund (Collaborator) commented

Sorry about the silence—I was on a short vacation.

Glad that you figured it out. Feel free to open new issues/discussions/PRs if you find something in asyncio-mqtt that you would like to add/change/fix. 👍
