High CPU usage per tab in Ghostty on Manjaro Linux #3658
-
I compiled the current version of Ghostty (1.0.1-dev+0000000):
Version
- version: 1.0.1-dev+0000000
- channel: tip
Build Config
- Zig version: 0.13.0
- build mode : builtin.OptimizeMode.ReleaseFast
- app runtime: apprt.Runtime.gtk
- font engine: font.main.Backend.fontconfig_freetype
- renderer : renderer.OpenGL
- libxev : main.Backend.io_uring
- GTK version:
build : 4.14.5
runtime : 4.14.5
- libadwaita : enabled
build : 1.5.3
runtime : 1.5.3

The problem is that Ghostty uses 100% of a CPU core for every open tab: with each new tab, CPU usage spikes to 100% on one additional core. I am using Manjaro Linux, and I am attaching the relevant system info below (a quick per-thread CPU check is sketched after it):
Kernel: 5.15.165-1-MANJARO arch: x86_64 bits: 64 compiler: gcc v: 14.2.1
clocksource: tsc avail: hpet,acpi_pm
parameters: BOOT_IMAGE=/@/boot/vmlinuz-5.15-x86_64
root=UUID=a9cbf476-db13-43bc-a095-49aa2591e826 rw rootflags=subvol=@ quiet
udev.log_priority=3
Desktop: Cinnamon v: 6.2.9 tk: GTK v: 3.24.43 wm: Muffin v: 6.2.0 tools:
avail: cinnamon-screensaver dm: LightDM v: 1.32.0 Distro: Manjaro
base: Arch Linux
CPU:
Info: model: Intel Core i7-6700HQ socket: U3E1 bits: 64 type: MT MCP
arch: Skylake-S gen: core 6 level: v3 note: check built: 2015
process: Intel 14nm family: 6 model-id: 0x5E (94) stepping: 3
microcode: 0xF0
Topology: cpus: 1x dies: 1 cores: 4 threads: 8 tpc: 2 smt: enabled cache:
L1: 256 KiB desc: d-4x32 KiB; i-4x32 KiB L2: 1024 KiB desc: 4x256 KiB
L3: 6 MiB desc: 1x6 MiB
Speed (MHz): avg: 2705 min/max: 800/3500 base/boost: 3100/8300 scaling:
driver: intel_pstate governor: powersave volts: 1.1 V ext-clock: 100 MHz
cores: 1: 2705 2: 2705 3: 2705 4: 2705 5: 2705 6: 2705 7: 2705 8: 2705
bogomips: 41621
Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
Vulnerabilities:
Type: gather_data_sampling status: Vulnerable: No microcode
Type: itlb_multihit status: KVM: VMX disabled
Type: l1tf mitigation: PTE Inversion; VMX: conditional cache flushes, SMT
vulnerable
Type: mds mitigation: Clear CPU buffers; SMT vulnerable
Type: meltdown mitigation: PTI
Type: mmio_stale_data mitigation: Clear CPU buffers; SMT vulnerable
Type: reg_file_data_sampling status: Not affected
Type: retbleed mitigation: IBRS
Type: spec_rstack_overflow status: Not affected
Type: spec_store_bypass mitigation: Speculative Store Bypass disabled via
prctl and seccomp
Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer
sanitization
Type: spectre_v2 mitigation: IBRS; IBPB: conditional; STIBP: conditional;
RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected
Type: srbds mitigation: Microcode
Type: tsx_async_abort mitigation: TSX disabled
Graphics:
Device-1: Intel HD Graphics 530 vendor: CLEVO/KAPOK driver: i915 v: kernel
arch: Gen-9 process: Intel 14n built: 2015-16 ports: active: eDP-1
empty: none bus-ID: 00:02.0 chip-ID: 8086:191b class-ID: 0300
Device-2: NVIDIA GM107M [GeForce GTX 960M] vendor: CLEVO/KAPOK
driver: nvidia v: 550.107.02 alternate: nouveau,nvidia_drm non-free: 550.xx+
status: current (as of 2024-09; EOL~2026-12-xx) arch: Maxwell code: GMxxx
process: TSMC 28nm built: 2014-2019 pcie: gen: 2 speed: 5 GT/s lanes: 8
link-max: gen: 3 speed: 8 GT/s lanes: 16 ports: active: none
empty: DP-1,DP-2,HDMI-A-1 bus-ID: 01:00.0 chip-ID: 10de:139b class-ID: 0300
Display: x11 server: X.org v: 1.21.1.13 with: Xwayland v: 24.1.2 driver: X:
loaded: modesetting,nvidia gpu: i915 display-ID: :0 screens: 1
Screen-1: 0 s-res: 1920x1080 s-size: <missing: xdpyinfo>
Monitor-1: eDP-1 mapped: eDP-1-1 model: LG Display 0x046f built: 2014
res: 1920x1080 hz: 60 dpi: 142 gamma: 1.2 size: 344x194mm (13.54x7.64")
diag: 395mm (15.5") ratio: 16:9 modes: 1920x1080
API: EGL v: 1.5 hw: drv: intel iris drv: nvidia platforms: device: 0
drv: nvidia device: 2 drv: iris device: 3 drv: swrast gbm: drv: nvidia
surfaceless: drv: nvidia x11: drv: nvidia inactive: wayland,device-1
API: OpenGL v: 4.6.0 compat-v: 4.5 vendor: nvidia mesa v: 550.107.02
glx-v: 1.4 direct-render: yes renderer: NVIDIA GeForce GTX 960M/PCIe/SSE2
memory: 1.95 GiB
API: Vulkan v: 1.3.279 layers: 1 device: 0 type: discrete-gpu
name: NVIDIA GeForce GTX 960M driver: nvidia v: 550.107.02
device-ID: 10de:139b surfaces: xcb,xlib device: 1 type: integrated-gpu
name: Intel HD Graphics 530 (SKL GT2) driver: mesa intel v: 24.1.6-arch1.1
device-ID: 8086:191b surfaces: xcb,xlib
Swap:
Kernel: swappiness: 60 (default) cache-pressure: 100 (default) zswap: yes
compressor: zstd max-pool: 20%
ID-1: swap-1 type: file size: 512 MiB used: 488.4 MiB (95.4%) priority: -2
file: /swap/swapfile
Info:
Memory: total: 32 GiB available: 31.16 GiB used: 11.78 GiB (37.8%)
igpu: 128 MiB
Processes: 457 Power: uptime: 114d 12h 35m states: freeze,mem,disk
suspend: deep avail: s2idle wakeups: 258 fails: 2 hibernate: platform
avail: shutdown, reboot, suspend, test_resume image: 12.46 GiB
services: csd-power,upowerd Init: systemd v: 256 default: graphical
tool: systemctl
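To pin down which Ghostty threads are busy, one quick check from another terminal is sketched below (it assumes the GTK build's process is simply named ghostty):

# show Ghostty's threads live, sorted by CPU usage
top -H -p "$(pgrep -x ghostty | head -n1)"

# or sample per-thread CPU once per second (pidstat is part of the sysstat package)
pidstat -t -p "$(pgrep -x ghostty | head -n1)" 1

If one extra thread sits near 100% for each open tab, that points at a per-surface thread rather than the GTK main loop.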
Replies: 4 comments · 6 replies
-
I don't think tabs are supposed to get that many unique resources, but can you try setting …
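(The specific option suggested here is cut off. For reference only, Ghostty reads its options from ~/.config/ghostty/config as plain key = value lines; the key in the sketch below is just a syntax example, not the setting being proposed:)

# append an option to Ghostty's config file (font-size is only a syntax example)
mkdir -p ~/.config/ghostty
echo 'font-size = 12' >> ~/.config/ghostty/config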
-
Maybe it's related to #3224.
-
Just wanted to help. I have been using Ghostty on different Linux laptops for a few days, and it is nice and works flawlessly. On my Chromebook (Debian 12), however, I have the high CPU problem. I have tried the .deb install and v1.0.1 and v1.0.0 built from source, and I see the same strange behavior. I made a small screencast to illustrate it: https://screencast.apps.chrome/1-MjA6K6nhMgpyFAp0fFtp8G53kCfBO3g?createdTime=2025-01-03T12%3A51%3A08.447Z I hope it helps. It looks like software rendering vs. hardware video rendering; still searching… I applied the "Epol" proposal and it is way better, but I am still a bit concerned: for me it still uses too much CPU, though it is usable now. Compare with a "native" Debian 12 (on an older, slower laptop) with normal CPU usage: https://youtu.be/i0-Jh3BXbMg. P.
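One way to check the software vs. hardware rendering guess is to ask GLX which renderer is active (glxinfo is in the mesa-utils package on Debian); a renderer string containing llvmpipe means OpenGL is running on the CPU:

# prints the "OpenGL renderer string"; llvmpipe/softpipe = software rendering
glxinfo -B | grep -i "opengl renderer"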
-
I am hitting the same situation on openSUSE Slowroll with the most recent updates, using the KDE desktop environment. I installed Ghostty from the prebuilt binary for openSUSE following this instruction: https://ghostty.org/docs/install/binary#opensuse. When I start Ghostty, the CPU monitor applet shows several cores at high usage; when I add a new tab, another CPU bar in the applet rises in step, and after I quit Ghostty the CPU bars return to normal.