Very high CPU usage when running sslh
I asked on unix.stackexchange.com why this software was so processor-intensive. Here is the question and the very good answer I got; it seems to be a bug. Hope this helps. http://unix.stackexchange.com/questions/141337/port-multiplexer-sslh-why-is-it-so-resource-intensive
This question comes from the open-source project: yrutschle/sslh
Been a long time, but I just wanted to add that my case, which I commented on back in 2014, was probably because sslh was feeding the connection back to itself, creating a loop, since nothing was listening on port 443 except sslh itself.
15 replies
weixin_39587010
2021-01-07 15:58
On Thu, Jul 10, 2014 at 06:00:47AM -0700, giovi321 wrote:
I asked on unix.stackexchange.com why this software was so processor-intensive. Here is the question and the very good answer I got; it seems to be a bug. Hope this helps. http://unix.stackexchange.com/questions/141337/port-multiplexer-sslh-why-is-it-so-resource-intensive
Thanks for the heads up. The answer given on Stack Exchange is wrong: select() is designed to wait (NOT busy-wait) even on non-blocking sockets, and making sockets non-blocking is a fundamental requirement in that kind of event loop (otherwise DoS attacks become fairly easily possible).
So: there’s something else going on here, and 50% CPU is definitely not right (well, unless you had thousands of connections I suppose).
Are you using sslh-select or sslh-fork? Using top, which process is taking the CPU time?
Cheers, Y.
weixin_39681621
2021-01-07 15:58
I had the same issue with sslh running in standalone mode. However, apache was not listening on port 443, which I guess somehow caused problems for sslh. I kept getting hundreds of lines per second in /var/log/auth.log like this:
Sep 25 11:54:36 edge sslh[3527]: connection from x.x.x.x:56914 to edge.domain.com:https forwarded from localhost:48897 to localhost:https
So either remove the --ssl 127.0.0.1:443 directive from your /etc/default/sslh file, or bring up apache (or another webserver) to listen on port 443.
Hope this helps!
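In /etc/default/sslh terms, the two fixes suggested above might look like this (a sketch only; 192.0.2.10 is a placeholder for your machine's external IP, and you would pick one option, not both):

```shell
# Option A: nothing serves HTTPS locally -> drop the --ssl target entirely.
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --pidfile /var/run/sslh/sslh.pid"

# Option B: keep --ssl, but run a real webserver (apache, nginx, ...) on
# 127.0.0.1:443, and bind sslh to the external IP only, so the forward
# target can never land back on sslh itself.
# DAEMON_OPTS="--user sslh --listen 192.0.2.10:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
```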
weixin_39587010
2021-01-07 15:58
Hi,
On Thu, Sep 25, 2014 at 02:58:12AM -0700, tombii wrote:
I had the same issue with sslh running in standalone mode. However, apache was not listening on port 443, which I guess somehow caused problems for sslh. So either remove the --ssl 127.0.0.1:443 directive from your /etc/default/sslh file, or bring up apache (or another webserver) to listen on port 443.
If that’s the case, that’s a bug in sslh.
However, I can’t reproduce this: sslh fails to connect to the remote SSL port (because no-one is listening), closes the connection, which in turn makes the inbound connection fail… all as expected.
Can you still reproduce this? With what version of sslh and what configuration?
Cheers, Y.
weixin_39714528
2021-01-07 15:58
Hi,
I know it’s pretty old, but I did hit the same problem:
- sslh running with the --ssl 127.0.0.1:443 directive
- no local webserver listening on port 443
- high CPU usage and a growing auth.log (a couple of GB per day), because the error mentioned above was being logged
However, what I discovered is that the high CPU usage had nothing to do with sslh in particular, but rather with fail2ban. fail2ban analyzes auth.log to catch fraudulent logins and block them. As sslh was writing several lines per second to auth.log, fail2ban had a hard job to do, using the full CPU power to parse the auth.log file.
Removing the --ssl directive from sslh, or bringing up an nginx server listening on 443 just for testing, put an end to this problem.
Just my 2 cents on this.
weixin_39587010
2021-01-07 15:58
Hi,
Thanks for the feedback:
On Fri, Jul 24, 2015 at 04:14:55AM -0700, Calin Chiorean wrote:
- sslh running with the --ssl 127.0.0.1:443 directive
- no local webserver listening on port 443
- high CPU usage and a growing auth.log (a couple of GB per day), because the error mentioned above was being logged
What message do you get in auth.log? I would expect sslh to only connect (and log) once for each incoming connection it tries to forward to localhost:443, not repeated logs.
Y.
weixin_39785970
2021-01-07 15:58
I had the same issue today. I am using sslh v1.13b on Raspbian.
/var/log/auth.log kept flooding with lines like these:
…
Jul 27 14:29:33 lou sslh[15327]: connection from localhost:49854 to localhost:https forwarded from localhost:49855 to localhost:https
Jul 27 14:29:33 lou sslh[15327]: connection from localhost:49855 to localhost:https forwarded from localhost:49856 to localhost:https
Jul 27 14:29:33 lou sslh[15327]: connection from localhost:49856 to localhost:https forwarded from localhost:49857 to localhost:https
Jul 27 14:29:33 lou sslh[15327]: connection from localhost:49857 to localhost:https forwarded from localhost:49858 to localhost:https
Jul 27 14:29:33 lou sslh[15327]: connection from localhost:49858 to localhost:https forwarded from localhost:49859 to localhost:https
Jul 27 14:29:33 lou sslh[15327]: connection from localhost:49859 to localhost:https forwarded from localhost:49860 to localhost:https
…
I used --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid as options. It seems to be an infinite self-redirect. TCP connections stay open.
I finally used --user sslh --listen X.Y.Z.X:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid (X.Y.Z.X being my external listening IP); that stopped this behaviour and I got “connection refused”.
Additionally, when I run sslh -F /etc/sslh.cfg -f with the basic.cfg, I get a core dump.
weixin_39714528
2021-01-07 15:58
Hello
What message do you get in auth.log? I would expect sslh to only connect (and log) once for each incoming connection it tries to forward to localhost:443, not repeated logs.
This was also strange to me. I had no connection to 443 and I did a tcpdump on the interface filtering on tcp 443, just to be sure that somebody out there is not trying some brute force / DoS on the port. There was nothing.
If needed I can reproduce this scenario.
weixin_39587010
2021-01-07 15:58
On Mon, Jul 27, 2015 at 08:27:56AM -0700, Calin Chiorean wrote:
What message do you get in auth.log? I would expect sslh to only connect (and log) once for each incoming connection it tries to forward to localhost:443, not repeated logs.
This was also strange to me. I had no connection to 443 and I did a tcpdump on the interface filtering on tcp 443, just to be sure that somebody out there is not trying some brute force / DoS on the port. There was nothing.
If needed I can reproduce this scenario.
Yes please, I’d really be interested in getting to the bottom of this, if you can be bothered trying it.
Was it sslh-fork or sslh-select? With select, I guess it’s possible some error condition is not handled properly and results in one incoming connection remaining active. With fork, I have no theory.
If you could run with --verbose --foreground and send both the auth.log and the console output, that’d be of great help.
weixin_39714528
2021-01-07 15:58
Hi,
I have limited time right now, but here is what I have done so far, plus some background info.
root:~# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
root:~# uname -a
Linux ng 3.13.0-55-generic #94-Ubuntu SMP Thu Jun 18 00:27:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root:~# /usr/sbin/sslh -V
sslh-fork 1.15
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --pidfile /var/run/sslh/sslh.pid"
root:~# tail /var/log/auth.log
Jul 27 18:33:45 ng sshd[26232]: Connection closed by 202.28.77.170 [preauth]
Jul 27 19:04:44 ng sshd[1564]: Invalid user nameserver from 202.28.77.170
Jul 27 19:04:44 ng sshd[1564]: input_userauth_request: invalid user nameserver [preauth]
Jul 27 19:04:44 ng sshd[1564]: pam_unix(sshd:auth): check pass; user unknown
Jul 27 19:04:44 ng sshd[1564]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.28.77.170
Jul 27 19:04:46 ng sshd[1564]: Failed password for invalid user nameserver from 202.28.77.170 port 29840 ssh2
Jul 27 19:04:46 ng sshd[1564]: Connection closed by 202.28.77.170 [preauth]
[CPU usage screenshot]

After:
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59299 to localhost:https forwarded from localhost:59300 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59300 to localhost:https forwarded from localhost:59301 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59301 to localhost:https forwarded from localhost:59302 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59302 to localhost:https forwarded from localhost:59303 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59303 to localhost:https forwarded from localhost:59304 to localhost:https
Jul 27 19:41:35 ng sslh[4395]: connection from localhost:59304 to localhost:https forwarded from localhost:59305 to localhost:https
Jul 27 19:41:35 ng sslh[4395]: connection from localhost:59305 to localhost:https forwarded from localhost:59306 to localhost:https
Jul 27 19:42:19 ng sslh[4395]: connection from localhost:59306 to localhost:https forwarded from localhost:59307 to localhost:https
[CPU usage screenshot]
After restarting sslh with the --ssl option, nothing was happening in auth.log. Then I opened a browser and entered https://IP, and auth.log started to flood with the above lines. I closed the browser and checked the interface for tcp 443 (nothing was coming in), but the flood in auth.log continued until I had to stop the sslh daemon.
As said before, the CPU in my case was exhausted by the fail2ban process, which had to parse the auth.log file.
I’ll update when I have a bit more time.
weixin_39587010
2021-01-07 15:58
On Mon, Jul 27, 2015 at 09:52:32AM -0700, Calin Chiorean wrote:
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
Ok, same issue that Florian reported above: sslh listens to all interfaces (0.0.0.0:443), so when browser connects to port 443, sslh forwards to 127.0.0.1:443, which… sslh is listening on.
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59299 to localhost:https forwarded from localhost:59300 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59300 to localhost:https forwarded from localhost:59301 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59301 to localhost:https forwarded from localhost:59302 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59302 to localhost:https forwarded from localhost:59303 to localhost:https
Jul 27 19:41:34 ng sslh[4395]: connection from localhost:59303 to localhost:https forwarded from localhost:59304 to localhost:https
Jul 27 19:41:35 ng sslh[4395]: connection from localhost:59304 to localhost:https forwarded from localhost:59305 to localhost:https
Jul 27 19:41:35 ng sslh[4395]: connection from localhost:59305 to localhost:https forwarded from localhost:59306 to localhost:https
Jul 27 19:42:19 ng sslh[4395]: connection from localhost:59306 to localhost:https forwarded from localhost:59307 to localhost:https
And then it connects from port 59300, sslh forwards 59300 to 59301, then 59301 to 59302, and so on.
I’ll try to think about whether it’s possible for sslh to detect that the configuration will create such a loop, or at least add a caveat to the documentation.
Thanks for your time!
weixin_39897746
2021-01-07 15:58
Can confirm the issue still exists. Personally, I just changed the port that Nginx listens on for SSL to 444, and it works absolutely fine (obviously, because no loop can occur). It was quite a shock seeing my syslog at 2 GB.